Test Report: Docker_Linux_containerd_arm64 22101

e65f928d8ebd0537e3fd5f2753f43f3d5796d0a1:2025-12-12:42734

Test fail (34/417)

Order  Failed test  Duration (s)
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 500.99
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 368.59
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.22
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.31
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.33
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 735.67
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.11
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.72
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.09
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.48
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.62
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.38
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.56
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.12
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 98.73
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.06
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.27
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.26
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.3
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.26
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.27
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.43
358 TestKubernetesUpgrade 805.54
426 TestStartStop/group/no-preload/serial/FirstStart 512.34
437 TestStartStop/group/newest-cni/serial/FirstStart 502.09
438 TestStartStop/group/no-preload/serial/DeployApp 3.04
439 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 101.55
442 TestStartStop/group/no-preload/serial/SecondStart 371.01
444 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 97.13
447 TestStartStop/group/newest-cni/serial/SecondStart 374.31
448 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.52
452 TestStartStop/group/newest-cni/serial/Pause 9.89
479 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 269.84
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (500.99s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-767012 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1212 00:08:41.615176    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:09:09.326024    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:57.046521    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:57.052979    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:57.064381    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:57.085847    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:57.127321    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:57.208819    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:57.370420    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:57.692195    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:58.334403    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:59.615809    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:11:02.177914    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:11:07.299378    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:11:17.541536    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:11:38.022923    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:12:18.984436    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:13:40.908850    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:13:41.615080    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-767012 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m19.450729647s)

-- stdout --
	* [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-767012" primary control-plane node in "functional-767012" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Found network options:
	  - HTTP_PROXY=localhost:36001
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:36001 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-767012 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-767012 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001247859s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000289392s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000289392s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
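The failure above carries two actionable signals: the proxy warning (NO_PROXY does not include the minikube IP 192.168.49.2) and a kubelet that never became healthy at http://127.0.0.1:10248/healthz. A minimal diagnostic sketch, using only the commands the log itself recommends (the profile name, proxy address, and flags are taken from this run; adapt to your environment):

    # Include the minikube IP in NO_PROXY, per the proxy warning above.
    export NO_PROXY="$NO_PROXY,192.168.49.2"

    # Probe the kubelet health endpoint that kubeadm's wait-control-plane polls.
    out/minikube-linux-arm64 -p functional-767012 ssh -- curl -sSL http://127.0.0.1:10248/healthz

    # Inspect kubelet state inside the node, as the kubeadm output advises.
    out/minikube-linux-arm64 -p functional-767012 ssh -- sudo systemctl status kubelet
    out/minikube-linux-arm64 -p functional-767012 ssh -- sudo journalctl -xeu kubelet --no-pager

    # Retry with the cgroup driver override suggested at the end of the log.
    out/minikube-linux-arm64 start -p functional-767012 --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd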
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-767012 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-767012
helpers_test.go:244: (dbg) docker inspect functional-767012:

-- stdout --
	[
	    {
	        "Id": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	        "Created": "2025-12-12T00:06:52.261765556Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42951,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:06:52.317917194Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hostname",
	        "HostsPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hosts",
	        "LogPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e-json.log",
	        "Name": "/functional-767012",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-767012:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-767012",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	                "LowerDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-767012",
	                "Source": "/var/lib/docker/volumes/functional-767012/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-767012",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-767012",
	                "name.minikube.sigs.k8s.io": "functional-767012",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e781257da3adf1d3284ab2a6de0168c3db7957f25a7e53d0015250294193762d",
	            "SandboxKey": "/var/run/docker/netns/e781257da3ad",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-767012": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:4d:78:ba:7d:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "83467cc4cb13818b98ec0d7cb5fc0064ea6eb2c8db4256a8a81330921aa2d9a4",
	                    "EndpointID": "b787b732d8d748776ceeb6e65fab51cc1e79758446bc85ac20043b35593fab12",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-767012",
	                        "6585a82fe5e6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
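The inspect dump shows the container itself is fine (State.Status "running", IP 192.168.49.2, apiserver port 8441 mapped to 127.0.0.1:32791), so the failure lies inside the node rather than at the Docker layer. A sketch for pulling just those fields with docker inspect's Go-template --format flag instead of scanning the full JSON (container name taken from this run):

    # Container state and host PID.
    docker inspect -f '{{.State.Status}} {{.State.Pid}}' functional-767012

    # Container IP on the minikube network.
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-767012

    # Host port mapped to the apiserver port 8441.
    docker inspect -f '{{index .NetworkSettings.Ports "8441/tcp"}}' functional-767012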
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012: exit status 6 (308.803102ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 00:15:06.929661   48050 status.go:458] kubeconfig endpoint: get endpoint: "functional-767012" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig

** /stderr **
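The status check fails only because the "functional-767012" entry is missing from the kubeconfig; the warning in the output names the fix. A minimal sketch (this repairs the kubectl context but will not revive the cluster itself, which never finished kubeadm init):

    out/minikube-linux-arm64 -p functional-767012 update-context
    kubectl config current-context    # should now report functional-767012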
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-095481 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ ssh            │ functional-095481 ssh sudo cat /etc/ssl/certs/42902.pem                                                                                                         │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ ssh            │ functional-095481 ssh sudo cat /usr/share/ca-certificates/42902.pem                                                                                             │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls                                                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ ssh            │ functional-095481 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image load --daemon kicbase/echo-server:functional-095481 --alsologtostderr                                                                   │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls                                                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image save kicbase/echo-server:functional-095481 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ update-context │ functional-095481 update-context --alsologtostderr -v=2                                                                                                         │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image rm kicbase/echo-server:functional-095481 --alsologtostderr                                                                              │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ update-context │ functional-095481 update-context --alsologtostderr -v=2                                                                                                         │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ update-context │ functional-095481 update-context --alsologtostderr -v=2                                                                                                         │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls                                                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls                                                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image save --daemon kicbase/echo-server:functional-095481 --alsologtostderr                                                                   │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls --format yaml --alsologtostderr                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls --format short --alsologtostderr                                                                                                     │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls --format table --alsologtostderr                                                                                                     │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls --format json --alsologtostderr                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ ssh            │ functional-095481 ssh pgrep buildkitd                                                                                                                           │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │                     │
	│ image          │ functional-095481 image build -t localhost/my-image:functional-095481 testdata/build --alsologtostderr                                                          │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls                                                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ delete         │ -p functional-095481                                                                                                                                            │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ start          │ -p functional-767012 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:06:47
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:06:47.197153   42564 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:06:47.197256   42564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:06:47.197260   42564 out.go:374] Setting ErrFile to fd 2...
	I1212 00:06:47.197263   42564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:06:47.197525   42564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:06:47.197938   42564 out.go:368] Setting JSON to false
	I1212 00:06:47.198741   42564 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2954,"bootTime":1765495054,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 00:06:47.198800   42564 start.go:143] virtualization:  
	I1212 00:06:47.203474   42564 out.go:179] * [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:06:47.207157   42564 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:06:47.207225   42564 notify.go:221] Checking for updates...
	I1212 00:06:47.214266   42564 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:06:47.219645   42564 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:06:47.222613   42564 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 00:06:47.225562   42564 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:06:47.228426   42564 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:06:47.231533   42564 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:06:47.257845   42564 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:06:47.257959   42564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:06:47.326960   42564 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-12 00:06:47.315772994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:06:47.327104   42564 docker.go:319] overlay module found
	I1212 00:06:47.330432   42564 out.go:179] * Using the docker driver based on user configuration
	I1212 00:06:47.333260   42564 start.go:309] selected driver: docker
	I1212 00:06:47.333269   42564 start.go:927] validating driver "docker" against <nil>
	I1212 00:06:47.333281   42564 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:06:47.333990   42564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:06:47.395640   42564 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-12 00:06:47.386140918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:06:47.395781   42564 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 00:06:47.395996   42564 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:06:47.398838   42564 out.go:179] * Using Docker driver with root privileges
	I1212 00:06:47.401888   42564 cni.go:84] Creating CNI manager for ""
	I1212 00:06:47.401972   42564 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:06:47.401982   42564 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 00:06:47.402086   42564 start.go:353] cluster config:
	{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:06:47.405203   42564 out.go:179] * Starting "functional-767012" primary control-plane node in "functional-767012" cluster
	I1212 00:06:47.408001   42564 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 00:06:47.411027   42564 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:06:47.413977   42564 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:06:47.414019   42564 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 00:06:47.414028   42564 cache.go:65] Caching tarball of preloaded images
	I1212 00:06:47.414028   42564 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:06:47.414129   42564 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 00:06:47.414138   42564 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 00:06:47.414519   42564 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/config.json ...
	I1212 00:06:47.414537   42564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/config.json: {Name:mk5167fad948f74f480c4d53e31ac2b2252b3057 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
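
The profile written here is plain JSON. For readers following along, a minimal sketch that loads and prints a few of the fields visible in the config dump above (the local struct is a hypothetical subset; the real one lives in minikube's pkg/minikube/config and is far larger):

package main

import (
    "encoding/json"
    "fmt"
    "os"
)

// Hypothetical minimal mirror of a few ClusterConfig fields from the
// start.go:353 dump above; decode only what we want to inspect.
type clusterConfig struct {
    Name             string
    Driver           string
    Memory           int
    KubernetesConfig struct {
        KubernetesVersion string
        ContainerRuntime  string
    }
}

func main() {
    data, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/profiles/functional-767012/config.json"))
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        return
    }
    var cc clusterConfig
    if err := json.Unmarshal(data, &cc); err != nil {
        fmt.Fprintln(os.Stderr, err)
        return
    }
    fmt.Printf("%s: %s/%s, %dMB, k8s %s\n", cc.Name, cc.Driver,
        cc.KubernetesConfig.ContainerRuntime, cc.Memory, cc.KubernetesConfig.KubernetesVersion)
}
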
	I1212 00:06:47.434448   42564 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:06:47.434459   42564 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:06:47.434478   42564 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:06:47.434508   42564 start.go:360] acquireMachinesLock for functional-767012: {Name:mk41cf89e93a3830367886ebbef2bb8f6e99e3f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:06:47.434619   42564 start.go:364] duration metric: took 97.502µs to acquireMachinesLock for "functional-767012"
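
The machines lock in these two lines is acquired with Delay:500ms and Timeout:10m0s. minikube's actual lock implementation is not shown in the log; the following is only an illustrative stand-in that polls an exclusive lock file on the same schedule:

package main

import (
    "errors"
    "fmt"
    "os"
    "time"
)

// Illustrative stand-in for the named lock above: retry every delay
// until timeout, using O_EXCL file creation as the "try" step.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
    deadline := time.Now().Add(timeout)
    for {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
        if err == nil {
            f.Close()
            return func() { os.Remove(path) }, nil
        }
        if time.Now().After(deadline) {
            return nil, errors.New("timed out waiting for " + path)
        }
        time.Sleep(delay)
    }
}

func main() {
    release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        return
    }
    defer release()
    fmt.Println("lock held")
}
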
	I1212 00:06:47.434645   42564 start.go:93] Provisioning new machine with config: &{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNS
Log:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 00:06:47.434754   42564 start.go:125] createHost starting for "" (driver="docker")
	I1212 00:06:47.439881   42564 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1212 00:06:47.440152   42564 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:36001 to docker env.
	I1212 00:06:47.440177   42564 start.go:159] libmachine.API.Create for "functional-767012" (driver="docker")
	I1212 00:06:47.440203   42564 client.go:173] LocalClient.Create starting
	I1212 00:06:47.440269   42564 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem
	I1212 00:06:47.440302   42564 main.go:143] libmachine: Decoding PEM data...
	I1212 00:06:47.440321   42564 main.go:143] libmachine: Parsing certificate...
	I1212 00:06:47.440370   42564 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem
	I1212 00:06:47.440385   42564 main.go:143] libmachine: Decoding PEM data...
	I1212 00:06:47.440396   42564 main.go:143] libmachine: Parsing certificate...
	I1212 00:06:47.440748   42564 cli_runner.go:164] Run: docker network inspect functional-767012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 00:06:47.456064   42564 cli_runner.go:211] docker network inspect functional-767012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 00:06:47.456156   42564 network_create.go:284] running [docker network inspect functional-767012] to gather additional debugging logs...
	I1212 00:06:47.456172   42564 cli_runner.go:164] Run: docker network inspect functional-767012
	W1212 00:06:47.471138   42564 cli_runner.go:211] docker network inspect functional-767012 returned with exit code 1
	I1212 00:06:47.471155   42564 network_create.go:287] error running [docker network inspect functional-767012]: docker network inspect functional-767012: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-767012 not found
	I1212 00:06:47.471167   42564 network_create.go:289] output of [docker network inspect functional-767012]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-767012 not found
	
	** /stderr **
	I1212 00:06:47.471269   42564 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:06:47.487406   42564 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018befc0}
	I1212 00:06:47.487438   42564 network_create.go:124] attempt to create docker network functional-767012 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 00:06:47.487491   42564 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-767012 functional-767012
	I1212 00:06:47.544730   42564 network_create.go:108] docker network functional-767012 192.168.49.0/24 created
	I1212 00:06:47.544751   42564 kic.go:121] calculated static IP "192.168.49.2" for the "functional-767012" container
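
The "calculated" static IP follows directly from the subnet picked above: the gateway takes the first host address of 192.168.49.0/24 and the node container the second. A small sketch of that arithmetic:

package main

import (
    "fmt"
    "net/netip"
)

// How the /24 chosen above maps to the gateway (.1) and the first
// node IP (.2) that minikube assigns to the kic container.
func main() {
    prefix := netip.MustParsePrefix("192.168.49.0/24")
    gateway := prefix.Addr().Next() // 192.168.49.1
    nodeIP := gateway.Next()        // 192.168.49.2
    fmt.Println(prefix, gateway, nodeIP)
}
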
	I1212 00:06:47.544833   42564 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 00:06:47.559700   42564 cli_runner.go:164] Run: docker volume create functional-767012 --label name.minikube.sigs.k8s.io=functional-767012 --label created_by.minikube.sigs.k8s.io=true
	I1212 00:06:47.576753   42564 oci.go:103] Successfully created a docker volume functional-767012
	I1212 00:06:47.576841   42564 cli_runner.go:164] Run: docker run --rm --name functional-767012-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-767012 --entrypoint /usr/bin/test -v functional-767012:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 00:06:48.111083   42564 oci.go:107] Successfully prepared a docker volume functional-767012
	I1212 00:06:48.111148   42564 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:06:48.111156   42564 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 00:06:48.111231   42564 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-767012:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 00:06:52.189648   42564 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-767012:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.078385728s)
	I1212 00:06:52.189669   42564 kic.go:203] duration metric: took 4.078509634s to extract preloaded images to volume ...
	W1212 00:06:52.189848   42564 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 00:06:52.189992   42564 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 00:06:52.247289   42564 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-767012 --name functional-767012 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-767012 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-767012 --network functional-767012 --ip 192.168.49.2 --volume functional-767012:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 00:06:52.543181   42564 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Running}}
	I1212 00:06:52.566334   42564 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:06:52.586972   42564 cli_runner.go:164] Run: docker exec functional-767012 stat /var/lib/dpkg/alternatives/iptables
	I1212 00:06:52.639202   42564 oci.go:144] the created container "functional-767012" has a running status.
	I1212 00:06:52.639220   42564 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa...
	I1212 00:06:52.963511   42564 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 00:06:52.991172   42564 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:06:53.025999   42564 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 00:06:53.026023   42564 kic_runner.go:114] Args: [docker exec --privileged functional-767012 chown docker:docker /home/docker/.ssh/authorized_keys]
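
A rough equivalent of the key generation happening at kic.go:225, using the standard library plus golang.org/x/crypto/ssh; the output file names mirror the log, while the key size is an assumption (the log only shows a 381-byte authorized_keys line being installed for the docker user):

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"

    "golang.org/x/crypto/ssh"
)

func main() {
    // Generate the machine key pair (2048 bits is an assumption here).
    priv, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    privPEM := pem.EncodeToMemory(&pem.Block{
        Type:  "RSA PRIVATE KEY",
        Bytes: x509.MarshalPKCS1PrivateKey(priv),
    })
    // authorized_keys form of the public key, as copied into the container.
    pub, err := ssh.NewPublicKey(&priv.PublicKey)
    if err != nil {
        panic(err)
    }
    if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
        panic(err)
    }
    if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
        panic(err)
    }
    fmt.Println("wrote id_rsa and id_rsa.pub")
}
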
	I1212 00:06:53.103766   42564 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:06:53.123815   42564 machine.go:94] provisionDockerMachine start ...
	I1212 00:06:53.123889   42564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:06:53.141545   42564 main.go:143] libmachine: Using SSH client type: native
	I1212 00:06:53.141870   42564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:06:53.141877   42564 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:06:53.142490   42564 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36058->127.0.0.1:32788: read: connection reset by peer
	I1212 00:06:56.290429   42564 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
	I1212 00:06:56.290443   42564 ubuntu.go:182] provisioning hostname "functional-767012"
	I1212 00:06:56.290506   42564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:06:56.307103   42564 main.go:143] libmachine: Using SSH client type: native
	I1212 00:06:56.307406   42564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:06:56.307417   42564 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-767012 && echo "functional-767012" | sudo tee /etc/hostname
	I1212 00:06:56.464023   42564 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
	I1212 00:06:56.464090   42564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:06:56.481507   42564 main.go:143] libmachine: Using SSH client type: native
	I1212 00:06:56.481803   42564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:06:56.481816   42564 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-767012' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-767012/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-767012' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:06:56.631075   42564 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:06:56.631100   42564 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 00:06:56.631126   42564 ubuntu.go:190] setting up certificates
	I1212 00:06:56.631133   42564 provision.go:84] configureAuth start
	I1212 00:06:56.631191   42564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:06:56.647644   42564 provision.go:143] copyHostCerts
	I1212 00:06:56.647701   42564 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 00:06:56.647715   42564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 00:06:56.647794   42564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 00:06:56.647879   42564 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 00:06:56.647883   42564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 00:06:56.647907   42564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 00:06:56.647956   42564 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 00:06:56.647960   42564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 00:06:56.647983   42564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 00:06:56.648026   42564 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.functional-767012 san=[127.0.0.1 192.168.49.2 functional-767012 localhost minikube]
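
provision.go:117 issues a server certificate signed by the local minikube CA with exactly the org and SAN list shown. A self-contained sketch of that issuance with crypto/x509 (the throwaway CA and key sizes are stand-ins so the example runs; minikube loads its real CA from the ca.pem/ca-key.pem paths above):

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    // Throwaway CA so the sketch runs; the real one is read from disk.
    caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
    }
    caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    caCert, _ := x509.ParseCertificate(caDER)

    // Server cert with the org and SANs from the provision.go:117 line.
    srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    srvTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{Organization: []string{"jenkins.functional-767012"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(26280 * time.Hour),
        DNSNames:     []string{"functional-767012", "localhost", "minikube"},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    }
    der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    if err != nil {
        panic(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
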
	I1212 00:06:56.826531   42564 provision.go:177] copyRemoteCerts
	I1212 00:06:56.826589   42564 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:06:56.826628   42564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:06:56.843384   42564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:06:56.946302   42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:06:56.962548   42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:06:56.978846   42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:06:56.994866   42564 provision.go:87] duration metric: took 363.718486ms to configureAuth
	I1212 00:06:56.994888   42564 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:06:56.995159   42564 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:06:56.995165   42564 machine.go:97] duration metric: took 3.871340713s to provisionDockerMachine
	I1212 00:06:56.995171   42564 client.go:176] duration metric: took 9.554963583s to LocalClient.Create
	I1212 00:06:56.995187   42564 start.go:167] duration metric: took 9.555010139s to libmachine.API.Create "functional-767012"
	I1212 00:06:56.995193   42564 start.go:293] postStartSetup for "functional-767012" (driver="docker")
	I1212 00:06:56.995203   42564 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:06:56.995259   42564 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:06:56.995315   42564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:06:57.015904   42564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:06:57.122793   42564 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:06:57.125967   42564 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:06:57.125984   42564 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:06:57.125995   42564 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 00:06:57.126050   42564 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 00:06:57.126132   42564 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 00:06:57.126210   42564 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts -> hosts in /etc/test/nested/copy/4290
	I1212 00:06:57.126257   42564 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4290
	I1212 00:06:57.133806   42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:06:57.151171   42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts --> /etc/test/nested/copy/4290/hosts (40 bytes)
	I1212 00:06:57.168939   42564 start.go:296] duration metric: took 173.73226ms for postStartSetup
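
The filesync scan above maps everything under .minikube/files/<path> onto /<path> on the node, which is how 42902.pem lands in /etc/ssl/certs and the nested hosts file in /etc/test/nested/copy/4290. A minimal sketch of that mapping (the base directory is an assumption):

package main

import (
    "fmt"
    "io/fs"
    "path/filepath"
    "strings"
)

// Mirror filesync.go:126/149: each file under the local assets dir is
// destined for the same path, rooted at /, on the node.
func main() {
    base := filepath.Join(".minikube", "files")
    filepath.WalkDir(base, func(p string, d fs.DirEntry, err error) error {
        if err != nil || d.IsDir() {
            return err
        }
        target := "/" + strings.TrimPrefix(p, base+string(filepath.Separator))
        fmt.Printf("%s -> %s\n", p, target)
        return nil
    })
}
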
	I1212 00:06:57.169298   42564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:06:57.186152   42564 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/config.json ...
	I1212 00:06:57.186427   42564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:06:57.186467   42564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:06:57.203405   42564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:06:57.303891   42564 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:06:57.308179   42564 start.go:128] duration metric: took 9.873412289s to createHost
	I1212 00:06:57.308194   42564 start.go:83] releasing machines lock for "functional-767012", held for 9.873568466s
	I1212 00:06:57.308273   42564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:06:57.327166   42564 out.go:179] * Found network options:
	I1212 00:06:57.330146   42564 out.go:179]   - HTTP_PROXY=localhost:36001
	W1212 00:06:57.333163   42564 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1212 00:06:57.336043   42564 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1212 00:06:57.338865   42564 ssh_runner.go:195] Run: cat /version.json
	I1212 00:06:57.338889   42564 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:06:57.338905   42564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:06:57.338970   42564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:06:57.359517   42564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:06:57.372630   42564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:06:57.554474   42564 ssh_runner.go:195] Run: systemctl --version
	I1212 00:06:57.560840   42564 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:06:57.565029   42564 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:06:57.565098   42564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:06:57.591094   42564 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
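
The find/mv pass above renames any bridge or podman CNI config aside with a .mk_disabled suffix so the recommended kindnet CNI owns pod networking. The same step, sketched in Go with the patterns from the find(1) invocation:

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
        matches, _ := filepath.Glob(pat)
        for _, m := range matches {
            if filepath.Ext(m) == ".mk_disabled" {
                continue // already disabled on a previous start
            }
            if err := os.Rename(m, m+".mk_disabled"); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }
}
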
	I1212 00:06:57.591106   42564 start.go:496] detecting cgroup driver to use...
	I1212 00:06:57.591137   42564 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 00:06:57.591184   42564 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:06:57.606473   42564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:06:57.619631   42564 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:06:57.619683   42564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:06:57.637563   42564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:06:57.656156   42564 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:06:57.782296   42564 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:06:57.917121   42564 docker.go:234] disabling docker service ...
	I1212 00:06:57.917206   42564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:06:57.938400   42564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:06:57.951630   42564 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:06:58.080881   42564 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:06:58.209260   42564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:06:58.222334   42564 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:06:58.236073   42564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 00:06:58.244429   42564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:06:58.252806   42564 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:06:58.252872   42564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:06:58.261363   42564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:06:58.269629   42564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:06:58.277786   42564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:06:58.285990   42564 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:06:58.293335   42564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 00:06:58.301289   42564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 00:06:58.309301   42564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 00:06:58.318040   42564 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:06:58.325342   42564 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:06:58.332418   42564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:06:58.455476   42564 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 00:06:58.579818   42564 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 00:06:58.579878   42564 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 00:06:58.584006   42564 start.go:564] Will wait 60s for crictl version
	I1212 00:06:58.584061   42564 ssh_runner.go:195] Run: which crictl
	I1212 00:06:58.587557   42564 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:06:58.612139   42564 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 00:06:58.612207   42564 ssh_runner.go:195] Run: containerd --version
	I1212 00:06:58.632080   42564 ssh_runner.go:195] Run: containerd --version
	I1212 00:06:58.656605   42564 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 00:06:58.659604   42564 cli_runner.go:164] Run: docker network inspect functional-767012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:06:58.675390   42564 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:06:58.678842   42564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
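
Both here for host.minikube.internal and later for control-plane.minikube.internal, minikube rewrites /etc/hosts idempotently: drop any existing line for the name, then append a fresh "IP<TAB>name" entry. A sketch of that pattern:

package main

import (
    "fmt"
    "os"
    "strings"
)

// ensureHostsEntry mirrors the bash one-liner in the log: filter out any
// line already ending in the name, then append the desired mapping.
func ensureHostsEntry(path, ip, name string) error {
    data, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    var kept []string
    for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
        if strings.HasSuffix(line, "\t"+name) {
            continue
        }
        kept = append(kept, line)
    }
    kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
    if err := ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}
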
	I1212 00:06:58.688190   42564 kubeadm.go:884] updating cluster {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:06:58.688287   42564 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:06:58.688347   42564 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:06:58.713204   42564 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:06:58.713216   42564 containerd.go:534] Images already preloaded, skipping extraction
	I1212 00:06:58.713271   42564 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:06:58.737086   42564 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:06:58.737097   42564 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:06:58.737102   42564 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 00:06:58.737183   42564 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-767012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
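
The kubelet drop-in above is generated from the cluster config. A sketch that renders the same unit text with text/template, with the template reconstructed from the log lines above (the field names are local to this sketch):

package main

import (
    "os"
    "text/template"
)

const unit = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
    t := template.Must(template.New("kubelet").Parse(unit))
    // Values taken from the kubeadm.go:947 dump above.
    t.Execute(os.Stdout, struct{ Version, Node, IP string }{
        "v1.35.0-beta.0", "functional-767012", "192.168.49.2",
    })
}
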
	I1212 00:06:58.737242   42564 ssh_runner.go:195] Run: sudo crictl info
	I1212 00:06:58.760814   42564 cni.go:84] Creating CNI manager for ""
	I1212 00:06:58.760824   42564 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:06:58.760842   42564 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:06:58.760867   42564 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-767012 NodeName:functional-767012 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:06:58.760981   42564 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-767012"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
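
To check what the generated KubeletConfiguration actually sets, the document decodes cleanly with any YAML library. A minimal sketch using gopkg.in/yaml.v3 and a local struct covering a few of the fields above (not the real k8s.io/kubelet config type):

package main

import (
    "fmt"

    "gopkg.in/yaml.v3"
)

type kubeletConfig struct {
    CgroupDriver             string `yaml:"cgroupDriver"`
    ClusterDomain            string `yaml:"clusterDomain"`
    ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
    FailSwapOn               bool   `yaml:"failSwapOn"`
}

func main() {
    // Excerpt of the KubeletConfiguration document from the log above.
    doc := []byte(`
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
clusterDomain: "cluster.local"
failSwapOn: false
`)
    var kc kubeletConfig
    if err := yaml.Unmarshal(doc, &kc); err != nil {
        panic(err)
    }
    fmt.Printf("%+v\n", kc)
}
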
	
	I1212 00:06:58.761045   42564 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:06:58.768439   42564 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:06:58.768494   42564 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:06:58.775651   42564 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 00:06:58.787812   42564 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:06:58.800601   42564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1212 00:06:58.813044   42564 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:06:58.816406   42564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:06:58.826172   42564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:06:58.941368   42564 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:06:58.958454   42564 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012 for IP: 192.168.49.2
	I1212 00:06:58.958464   42564 certs.go:195] generating shared ca certs ...
	I1212 00:06:58.958478   42564 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:06:58.958645   42564 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 00:06:58.958697   42564 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 00:06:58.958703   42564 certs.go:257] generating profile certs ...
	I1212 00:06:58.958767   42564 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key
	I1212 00:06:58.958777   42564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt with IP's: []
	I1212 00:06:59.414847   42564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt ...
	I1212 00:06:59.414877   42564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: {Name:mk2e53d59ca31de5ec122adc19e355e9d6363f31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:06:59.415090   42564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key ...
	I1212 00:06:59.415097   42564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key: {Name:mkc5410a305906ba4b2f4736459e0bd9517fa04d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:06:59.415186   42564 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key.fcbff5a4
	I1212 00:06:59.415197   42564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt.fcbff5a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1212 00:06:59.614173   42564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt.fcbff5a4 ...
	I1212 00:06:59.614187   42564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt.fcbff5a4: {Name:mkcdecde159a1729b44ee0ef69d47828ae2fafaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:06:59.614359   42564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key.fcbff5a4 ...
	I1212 00:06:59.614366   42564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key.fcbff5a4: {Name:mk527300555574f037458024e4c71b6423b90770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:06:59.614446   42564 certs.go:382] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt.fcbff5a4 -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt
	I1212 00:06:59.614523   42564 certs.go:386] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key.fcbff5a4 -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key
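
The 10.96.0.1 in the apiserver cert's IP list above is not arbitrary: it is the first usable address of the ServiceCIDR 10.96.0.0/12 from the cluster config, i.e. the ClusterIP of the default kubernetes Service, which clients inside the cluster use to reach the apiserver:

package main

import (
    "fmt"
    "net/netip"
)

func main() {
    svc := netip.MustParsePrefix("10.96.0.0/12")
    fmt.Println(svc.Addr().Next()) // 10.96.0.1
}
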
	I1212 00:06:59.614573   42564 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key
	I1212 00:06:59.614588   42564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt with IP's: []
	I1212 00:06:59.869503   42564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt ...
	I1212 00:06:59.869517   42564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt: {Name:mk283ec0ae854b1bd17de590b872acb2c9ee389c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:06:59.869703   42564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key ...
	I1212 00:06:59.869710   42564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key: {Name:mk29d442703b1a29103830f9d8cac58a7d3cd2db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:06:59.869899   42564 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 00:06:59.869948   42564 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 00:06:59.869958   42564 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:06:59.869987   42564 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:06:59.870013   42564 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:06:59.870035   42564 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 00:06:59.870078   42564 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:06:59.870629   42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:06:59.888316   42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:06:59.906312   42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:06:59.924325   42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:06:59.941487   42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:06:59.958418   42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:06:59.975811   42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:06:59.992136   42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:07:00.045062   42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 00:07:00.101976   42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 00:07:00.178024   42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:07:00.226934   42564 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:07:00.256909   42564 ssh_runner.go:195] Run: openssl version
	I1212 00:07:00.275933   42564 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 00:07:00.290818   42564 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 00:07:00.302618   42564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 00:07:00.316383   42564 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 00:07:00.316454   42564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 00:07:00.375820   42564 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:07:00.384711   42564 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4290.pem /etc/ssl/certs/51391683.0
	I1212 00:07:00.394134   42564 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 00:07:00.403394   42564 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 00:07:00.412367   42564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 00:07:00.417362   42564 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 00:07:00.417441   42564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 00:07:00.465285   42564 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:07:00.473542   42564 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42902.pem /etc/ssl/certs/3ec20f2e.0
	I1212 00:07:00.481276   42564 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:07:00.489069   42564 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:07:00.496931   42564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:07:00.500927   42564 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:07:00.500992   42564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:07:00.542408   42564 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:07:00.550763   42564 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
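
Each openssl -hash / ln -fs pair above installs a subject-hash symlink (e.g. b5213941.0 for minikubeCA.pem) so OpenSSL-based clients can look the CA up by subject. The same two steps sketched in Go, shelling out to the identical openssl invocation the log uses:

package main

import (
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

// hashLink computes the OpenSSL subject hash of a PEM cert and links
// /etc/ssl/certs/<hash>.0 at it, replacing any stale link like ln -fs.
func hashLink(pemPath string) error {
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    if err != nil {
        return err
    }
    link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    os.Remove(link)
    return os.Symlink(pemPath, link)
}

func main() {
    if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}
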
	I1212 00:07:00.559527   42564 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:07:00.564282   42564 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:07:00.564324   42564 kubeadm.go:401] StartCluster: {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:07:00.564397   42564 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 00:07:00.564465   42564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:07:00.601747   42564 cri.go:89] found id: ""
	I1212 00:07:00.601804   42564 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:07:00.609766   42564 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:07:00.617686   42564 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:07:00.617745   42564 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:07:00.626117   42564 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:07:00.626126   42564 kubeadm.go:158] found existing configuration files:
	
	I1212 00:07:00.626189   42564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:07:00.634065   42564 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:07:00.634122   42564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:07:00.641914   42564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:07:00.649757   42564 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:07:00.649818   42564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:07:00.657377   42564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:07:00.665403   42564 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:07:00.665461   42564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:07:00.673023   42564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:07:00.680896   42564 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:07:00.680953   42564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:07:00.688727   42564 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:07:00.726537   42564 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:07:00.726756   42564 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:07:00.803761   42564 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:07:00.803824   42564 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:07:00.803859   42564 kubeadm.go:319] OS: Linux
	I1212 00:07:00.803902   42564 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:07:00.803949   42564 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:07:00.803995   42564 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:07:00.804042   42564 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:07:00.804089   42564 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:07:00.804136   42564 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:07:00.804180   42564 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:07:00.804235   42564 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:07:00.804286   42564 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:07:00.881105   42564 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:07:00.881209   42564 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:07:00.881299   42564 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:07:00.886747   42564 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:07:00.893249   42564 out.go:252]   - Generating certificates and keys ...
	I1212 00:07:00.893345   42564 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:07:00.893418   42564 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:07:00.973301   42564 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:07:01.232024   42564 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:07:01.342941   42564 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:07:01.419328   42564 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 00:07:01.773039   42564 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 00:07:01.773298   42564 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-767012 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:07:01.973487   42564 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 00:07:01.973758   42564 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-767012 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 00:07:02.216434   42564 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:07:02.436493   42564 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:07:02.996087   42564 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 00:07:02.996368   42564 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:07:03.197956   42564 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:07:03.458438   42564 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:07:03.661265   42564 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:07:03.790618   42564 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:07:04.202621   42564 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:07:04.203482   42564 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:07:04.207624   42564 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:07:04.211147   42564 out.go:252]   - Booting up control plane ...
	I1212 00:07:04.211246   42564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:07:04.211327   42564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:07:04.212009   42564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:07:04.242961   42564 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:07:04.243110   42564 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:07:04.250760   42564 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:07:04.251035   42564 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:07:04.251218   42564 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:07:04.387520   42564 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:07:04.387634   42564 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:11:04.388521   42564 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001247859s
	I1212 00:11:04.388544   42564 kubeadm.go:319] 
	I1212 00:11:04.388606   42564 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:11:04.388660   42564 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:11:04.388770   42564 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:11:04.388777   42564 kubeadm.go:319] 
	I1212 00:11:04.388882   42564 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:11:04.388913   42564 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:11:04.388943   42564 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:11:04.388958   42564 kubeadm.go:319] 
	I1212 00:11:04.393990   42564 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:11:04.394393   42564 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:11:04.394494   42564 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:11:04.394713   42564 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 00:11:04.394717   42564 kubeadm.go:319] 
	I1212 00:11:04.394780   42564 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 00:11:04.394878   42564 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-767012 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-767012 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001247859s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 00:11:04.394973   42564 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 00:11:04.817043   42564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:11:04.830276   42564 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:11:04.830330   42564 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:11:04.838157   42564 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:11:04.838165   42564 kubeadm.go:158] found existing configuration files:
	
	I1212 00:11:04.838213   42564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:11:04.846139   42564 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:11:04.846194   42564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:11:04.853619   42564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:11:04.861776   42564 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:11:04.861839   42564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:11:04.869470   42564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:11:04.877266   42564 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:11:04.877321   42564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:11:04.884704   42564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:11:04.892277   42564 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:11:04.892335   42564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:11:04.900247   42564 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:11:04.941168   42564 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:11:04.941217   42564 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:11:05.024404   42564 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:11:05.024470   42564 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:11:05.024505   42564 kubeadm.go:319] OS: Linux
	I1212 00:11:05.024548   42564 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:11:05.024595   42564 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:11:05.024642   42564 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:11:05.024688   42564 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:11:05.024735   42564 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:11:05.024782   42564 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:11:05.024826   42564 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:11:05.024873   42564 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:11:05.024918   42564 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:11:05.098371   42564 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:11:05.098488   42564 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:11:05.098583   42564 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:11:05.107418   42564 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:11:05.112606   42564 out.go:252]   - Generating certificates and keys ...
	I1212 00:11:05.112694   42564 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:11:05.112764   42564 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:11:05.112847   42564 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:11:05.112919   42564 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:11:05.112998   42564 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:11:05.113067   42564 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 00:11:05.113137   42564 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:11:05.113203   42564 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:11:05.113287   42564 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:11:05.113367   42564 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:11:05.113415   42564 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 00:11:05.113477   42564 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:11:05.267574   42564 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:11:05.388677   42564 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:11:05.446578   42564 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:11:05.627991   42564 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:11:05.985910   42564 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:11:05.986540   42564 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:11:05.989374   42564 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:11:05.992461   42564 out.go:252]   - Booting up control plane ...
	I1212 00:11:05.992578   42564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:11:05.992669   42564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:11:05.993416   42564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:11:06.019060   42564 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:11:06.019163   42564 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:11:06.028166   42564 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:11:06.029163   42564 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:11:06.030602   42564 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:11:06.184076   42564 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:11:06.184190   42564 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:15:06.183993   42564 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000289392s
	I1212 00:15:06.184020   42564 kubeadm.go:319] 
	I1212 00:15:06.184074   42564 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:15:06.184105   42564 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:15:06.184203   42564 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:15:06.184206   42564 kubeadm.go:319] 
	I1212 00:15:06.184329   42564 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:15:06.184371   42564 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:15:06.184402   42564 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:15:06.184406   42564 kubeadm.go:319] 
	I1212 00:15:06.188762   42564 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:15:06.189222   42564 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:15:06.189347   42564 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:15:06.189587   42564 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 00:15:06.189592   42564 kubeadm.go:319] 
	I1212 00:15:06.189666   42564 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 00:15:06.189706   42564 kubeadm.go:403] duration metric: took 8m5.625385304s to StartCluster
	I1212 00:15:06.189737   42564 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:15:06.189801   42564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:15:06.218090   42564 cri.go:89] found id: ""
	I1212 00:15:06.218103   42564 logs.go:282] 0 containers: []
	W1212 00:15:06.218110   42564 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:15:06.218115   42564 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:15:06.218176   42564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:15:06.245709   42564 cri.go:89] found id: ""
	I1212 00:15:06.245723   42564 logs.go:282] 0 containers: []
	W1212 00:15:06.245730   42564 logs.go:284] No container was found matching "etcd"
	I1212 00:15:06.245734   42564 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:15:06.245810   42564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:15:06.270125   42564 cri.go:89] found id: ""
	I1212 00:15:06.270138   42564 logs.go:282] 0 containers: []
	W1212 00:15:06.270144   42564 logs.go:284] No container was found matching "coredns"
	I1212 00:15:06.270149   42564 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:15:06.270208   42564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:15:06.293928   42564 cri.go:89] found id: ""
	I1212 00:15:06.293941   42564 logs.go:282] 0 containers: []
	W1212 00:15:06.293948   42564 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:15:06.293953   42564 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:15:06.294011   42564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:15:06.319402   42564 cri.go:89] found id: ""
	I1212 00:15:06.319415   42564 logs.go:282] 0 containers: []
	W1212 00:15:06.319423   42564 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:15:06.319428   42564 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:15:06.319489   42564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:15:06.343928   42564 cri.go:89] found id: ""
	I1212 00:15:06.343942   42564 logs.go:282] 0 containers: []
	W1212 00:15:06.343948   42564 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:15:06.343956   42564 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:15:06.344020   42564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:15:06.369782   42564 cri.go:89] found id: ""
	I1212 00:15:06.369796   42564 logs.go:282] 0 containers: []
	W1212 00:15:06.369803   42564 logs.go:284] No container was found matching "kindnet"
	I1212 00:15:06.369811   42564 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:15:06.369825   42564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:15:06.436703   42564 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:15:06.428587    4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:15:06.429257    4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:15:06.430360    4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:15:06.430878    4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:15:06.432332    4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:15:06.428587    4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:15:06.429257    4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:15:06.430360    4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:15:06.430878    4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:15:06.432332    4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:15:06.436714   42564 logs.go:123] Gathering logs for containerd ...
	I1212 00:15:06.436726   42564 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:15:06.474894   42564 logs.go:123] Gathering logs for container status ...
	I1212 00:15:06.474913   42564 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:15:06.504172   42564 logs.go:123] Gathering logs for kubelet ...
	I1212 00:15:06.504187   42564 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:15:06.563885   42564 logs.go:123] Gathering logs for dmesg ...
	I1212 00:15:06.563905   42564 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 00:15:06.579505   42564 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000289392s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 00:15:06.579538   42564 out.go:285] * 
	W1212 00:15:06.579600   42564 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000289392s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 00:15:06.579611   42564 out.go:285] * 
	W1212 00:15:06.581746   42564 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:15:06.588757   42564 out.go:203] 
	W1212 00:15:06.592347   42564 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000289392s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 00:15:06.592387   42564 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 00:15:06.592406   42564 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 00:15:06.595444   42564 out.go:203] 
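
The suggestion above targets the cgroup driver, but the repeated [WARNING SystemVerification] lines and the kubelet journal further down point at cgroup v1 itself: kubelet v1.35 refuses to start on a cgroup v1 host unless its configuration sets failCgroupV1 to false. Below is a minimal sketch of that override, assuming the host genuinely has to stay on cgroup v1. It uses kubeadm's --patches mechanism (the same one behind the '[patches] Applied patch ... to target "kubeletconfiguration"' lines above); the /tmp/kubeadm-patches path is illustrative, not something minikube creates.

    # Hypothetical patch directory; kubeadm picks up files named
    # <target>[+<patchtype>].yaml, here a strategic-merge patch for
    # the generated KubeletConfiguration.
    mkdir -p /tmp/kubeadm-patches
    cat > /tmp/kubeadm-patches/kubeletconfiguration+strategic.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Re-enable deprecated cgroup v1 support for kubelet v1.35+,
    # as the SystemVerification warning above describes.
    failCgroupV1: false
    EOF
    # Re-run init with the patch; SystemVerification is already in the
    # --ignore-preflight-errors list minikube passes above.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --patches /tmp/kubeadm-patches \
      --ignore-preflight-errors=SystemVerification
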
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.522476018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.522548962Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.522635609Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.522710514Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.522769009Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.522829711Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.522884440Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.522942590Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.523041323Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.523135421Z" level=info msg="Connect containerd service"
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.523513541Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.524145628Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.537965981Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.538182508Z" level=info msg="Start subscribing containerd event"
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.538339974Z" level=info msg="Start recovering state"
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.538280330Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.577805643Z" level=info msg="Start event monitor"
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.577855096Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.577866526Z" level=info msg="Start streaming server"
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.577884396Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.577895350Z" level=info msg="runtime interface starting up..."
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.577903810Z" level=info msg="starting plugins..."
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.577919958Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 00:06:58 functional-767012 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.579450564Z" level=info msg="containerd successfully booted in 0.081603s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:15:07.572975    4898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:15:07.573740    4898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:15:07.575493    4898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:15:07.576098    4898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:15:07.577663    4898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 00:15:07 up 57 min,  0 user,  load average: 0.21, 0.46, 0.69
	Linux functional-767012 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:15:04 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:15:05 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 12 00:15:05 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:15:05 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:15:05 functional-767012 kubelet[4698]: E1212 00:15:05.084005    4698 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:15:05 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:15:05 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:15:05 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 12 00:15:05 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:15:05 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:15:05 functional-767012 kubelet[4704]: E1212 00:15:05.833868    4704 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:15:05 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:15:05 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:15:06 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 12 00:15:06 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:15:06 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:15:06 functional-767012 kubelet[4788]: E1212 00:15:06.620191    4788 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:15:06 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:15:06 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:15:07 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 12 00:15:07 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:15:07 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:15:07 functional-767012 kubelet[4833]: E1212 00:15:07.351174    4833 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:15:07 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:15:07 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
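
The kubelet section above shows the node agent stuck in a systemd restart loop: this kubelet build refuses to start on a host running cgroup v1, so the apiserver on port 8441 never comes up, which is also why the describe-nodes call was refused. A minimal way to confirm the host's cgroup mode (a sketch; the container name is the one from this run):

	# cgroup2fs => unified cgroup v2; tmpfs => legacy cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# the same check inside the minikube node container
	docker exec functional-767012 stat -fc %T /sys/fs/cgroup/
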
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012: exit status 6 (397.417215ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 00:15:08.129246   48264 status.go:458] kubeconfig endpoint: get endpoint: "functional-767012" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-767012" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (500.99s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.59s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1212 00:15:08.143477    4290 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-767012 --alsologtostderr -v=8
E1212 00:15:57.042930    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:16:24.750214    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:18:41.615349    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:20:04.688246    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:20:57.042897    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-767012 --alsologtostderr -v=8: exit status 80 (6m5.664062506s)

-- stdout --
	* [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-767012" primary control-plane node in "functional-767012" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 

-- /stdout --
** stderr ** 
	I1212 00:15:08.188216   48339 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:15:08.188435   48339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:15:08.188463   48339 out.go:374] Setting ErrFile to fd 2...
	I1212 00:15:08.188485   48339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:15:08.188893   48339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:15:08.189436   48339 out.go:368] Setting JSON to false
	I1212 00:15:08.190327   48339 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3455,"bootTime":1765495054,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 00:15:08.190468   48339 start.go:143] virtualization:  
	I1212 00:15:08.194075   48339 out.go:179] * [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:15:08.197745   48339 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:15:08.197889   48339 notify.go:221] Checking for updates...
	I1212 00:15:08.203623   48339 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:15:08.206559   48339 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:08.209313   48339 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 00:15:08.212202   48339 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:15:08.215231   48339 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:15:08.218454   48339 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:15:08.218601   48339 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:15:08.244528   48339 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:15:08.244655   48339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:15:08.299617   48339 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:15:08.290252755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:15:08.299730   48339 docker.go:319] overlay module found
	I1212 00:15:08.302863   48339 out.go:179] * Using the docker driver based on existing profile
	I1212 00:15:08.305730   48339 start.go:309] selected driver: docker
	I1212 00:15:08.305754   48339 start.go:927] validating driver "docker" against &{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:15:08.305854   48339 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:15:08.305953   48339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:15:08.359436   48339 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:15:08.349975764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:15:08.359860   48339 cni.go:84] Creating CNI manager for ""
	I1212 00:15:08.359920   48339 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:15:08.359966   48339 start.go:353] cluster config:
	{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:15:08.363136   48339 out.go:179] * Starting "functional-767012" primary control-plane node in "functional-767012" cluster
	I1212 00:15:08.365917   48339 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 00:15:08.368829   48339 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:15:08.371809   48339 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:15:08.371858   48339 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 00:15:08.371872   48339 cache.go:65] Caching tarball of preloaded images
	I1212 00:15:08.371970   48339 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 00:15:08.371992   48339 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 00:15:08.372099   48339 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/config.json ...
	I1212 00:15:08.372328   48339 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:15:08.391509   48339 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:15:08.391533   48339 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:15:08.391552   48339 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:15:08.391583   48339 start.go:360] acquireMachinesLock for functional-767012: {Name:mk41cf89e93a3830367886ebbef2bb8f6e99e3f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:15:08.391643   48339 start.go:364] duration metric: took 36.464µs to acquireMachinesLock for "functional-767012"
	I1212 00:15:08.391666   48339 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:15:08.391675   48339 fix.go:54] fixHost starting: 
	I1212 00:15:08.391939   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:08.408717   48339 fix.go:112] recreateIfNeeded on functional-767012: state=Running err=<nil>
	W1212 00:15:08.408748   48339 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:15:08.411849   48339 out.go:252] * Updating the running docker "functional-767012" container ...
	I1212 00:15:08.411881   48339 machine.go:94] provisionDockerMachine start ...
	I1212 00:15:08.411961   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:08.429482   48339 main.go:143] libmachine: Using SSH client type: native
	I1212 00:15:08.429817   48339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:15:08.429834   48339 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:15:08.578648   48339 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
	I1212 00:15:08.578671   48339 ubuntu.go:182] provisioning hostname "functional-767012"
	I1212 00:15:08.578741   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:08.596871   48339 main.go:143] libmachine: Using SSH client type: native
	I1212 00:15:08.597187   48339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:15:08.597227   48339 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-767012 && echo "functional-767012" | sudo tee /etc/hostname
	I1212 00:15:08.759668   48339 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
	I1212 00:15:08.759746   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:08.776780   48339 main.go:143] libmachine: Using SSH client type: native
	I1212 00:15:08.777096   48339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:15:08.777119   48339 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-767012' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-767012/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-767012' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:15:08.931523   48339 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:15:08.931550   48339 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 00:15:08.931582   48339 ubuntu.go:190] setting up certificates
	I1212 00:15:08.931592   48339 provision.go:84] configureAuth start
	I1212 00:15:08.931653   48339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:15:08.952406   48339 provision.go:143] copyHostCerts
	I1212 00:15:08.952454   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 00:15:08.952497   48339 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 00:15:08.952507   48339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 00:15:08.952585   48339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 00:15:08.952685   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 00:15:08.952707   48339 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 00:15:08.952712   48339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 00:15:08.952745   48339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 00:15:08.952800   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 00:15:08.952821   48339 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 00:15:08.952828   48339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 00:15:08.952852   48339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 00:15:08.952913   48339 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.functional-767012 san=[127.0.0.1 192.168.49.2 functional-767012 localhost minikube]
	I1212 00:15:09.089842   48339 provision.go:177] copyRemoteCerts
	I1212 00:15:09.089908   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:15:09.089956   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.108065   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.210645   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:15:09.210700   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:15:09.228116   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:15:09.228176   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:15:09.245824   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:15:09.245889   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:15:09.263086   48339 provision.go:87] duration metric: took 331.470752ms to configureAuth
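	The server certificate regenerated above embeds the SANs listed in the log (127.0.0.1, 192.168.49.2, functional-767012, localhost, minikube). Once copied to /etc/docker on the node they can be spot-checked with openssl (a sketch; the -ext flag needs OpenSSL 1.1.1 or newer):
	
	    # print the Subject Alternative Names baked into the provisioned cert
	    sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName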
	I1212 00:15:09.263116   48339 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:15:09.263293   48339 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:15:09.263306   48339 machine.go:97] duration metric: took 851.418761ms to provisionDockerMachine
	I1212 00:15:09.263315   48339 start.go:293] postStartSetup for "functional-767012" (driver="docker")
	I1212 00:15:09.263326   48339 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:15:09.263390   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:15:09.263439   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.281753   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.386868   48339 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:15:09.390421   48339 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 00:15:09.390442   48339 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 00:15:09.390447   48339 command_runner.go:130] > VERSION_ID="12"
	I1212 00:15:09.390451   48339 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 00:15:09.390456   48339 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 00:15:09.390460   48339 command_runner.go:130] > ID=debian
	I1212 00:15:09.390464   48339 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 00:15:09.390469   48339 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 00:15:09.390475   48339 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 00:15:09.390546   48339 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:15:09.390568   48339 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:15:09.390580   48339 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 00:15:09.390640   48339 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 00:15:09.390732   48339 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 00:15:09.390742   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> /etc/ssl/certs/42902.pem
	I1212 00:15:09.390816   48339 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts -> hosts in /etc/test/nested/copy/4290
	I1212 00:15:09.390824   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts -> /etc/test/nested/copy/4290/hosts
	I1212 00:15:09.390867   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4290
	I1212 00:15:09.398526   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:15:09.416059   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts --> /etc/test/nested/copy/4290/hosts (40 bytes)
	I1212 00:15:09.433237   48339 start.go:296] duration metric: took 169.908089ms for postStartSetup
	I1212 00:15:09.433321   48339 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:15:09.433384   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.450800   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.556105   48339 command_runner.go:130] > 14%
	I1212 00:15:09.557034   48339 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:15:09.562380   48339 command_runner.go:130] > 169G
	I1212 00:15:09.562946   48339 fix.go:56] duration metric: took 1.171267005s for fixHost
	I1212 00:15:09.562967   48339 start.go:83] releasing machines lock for "functional-767012", held for 1.171312429s
	I1212 00:15:09.563050   48339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:15:09.582602   48339 ssh_runner.go:195] Run: cat /version.json
	I1212 00:15:09.582654   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.582889   48339 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:15:09.582947   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.601106   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.627042   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.706722   48339 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1212 00:15:09.706847   48339 ssh_runner.go:195] Run: systemctl --version
	I1212 00:15:09.800321   48339 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 00:15:09.800390   48339 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 00:15:09.800423   48339 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 00:15:09.800514   48339 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:15:09.804624   48339 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 00:15:09.804945   48339 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:15:09.805036   48339 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:15:09.812955   48339 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:15:09.813030   48339 start.go:496] detecting cgroup driver to use...
	I1212 00:15:09.813095   48339 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 00:15:09.813242   48339 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:15:09.829352   48339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:15:09.842558   48339 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:15:09.842620   48339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:15:09.858553   48339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:15:09.872251   48339 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:15:10.008398   48339 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:15:10.140361   48339 docker.go:234] disabling docker service ...
	I1212 00:15:10.140425   48339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:15:10.156860   48339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:15:10.170461   48339 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:15:10.304156   48339 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:15:10.452566   48339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:15:10.465745   48339 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:15:10.479553   48339 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 00:15:10.480868   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 00:15:10.489677   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:15:10.498827   48339 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:15:10.498939   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:15:10.508103   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:15:10.516726   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:15:10.525281   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:15:10.533906   48339 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:15:10.541697   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 00:15:10.550595   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 00:15:10.559645   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
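	The sed edits above patch /etc/containerd/config.toml in place rather than regenerating it. A quick way to confirm the values they should have left behind before the restart below (a sketch; the exact file layout depends on the image's base config):
	
	    # spot-check the rewritten containerd settings
	    sudo grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
	    # expected after the edits: SystemdCgroup = false (cgroupfs driver),
	    # sandbox_image = "registry.k8s.io/pause:3.10.1", restrict_oom_score_adj = false,
	    # enable_unprivileged_ports = true, conf_dir = "/etc/cni/net.d"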
	I1212 00:15:10.568588   48339 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:15:10.575412   48339 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 00:15:10.576366   48339 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:15:10.583788   48339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:15:10.698857   48339 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 00:15:10.837222   48339 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 00:15:10.837316   48339 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 00:15:10.841505   48339 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1212 00:15:10.841543   48339 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 00:15:10.841551   48339 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1212 00:15:10.841558   48339 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:15:10.841564   48339 command_runner.go:130] > Access: 2025-12-12 00:15:10.793315522 +0000
	I1212 00:15:10.841569   48339 command_runner.go:130] > Modify: 2025-12-12 00:15:10.793315522 +0000
	I1212 00:15:10.841575   48339 command_runner.go:130] > Change: 2025-12-12 00:15:10.793315522 +0000
	I1212 00:15:10.841583   48339 command_runner.go:130] >  Birth: -
	I1212 00:15:10.841612   48339 start.go:564] Will wait 60s for crictl version
	I1212 00:15:10.841667   48339 ssh_runner.go:195] Run: which crictl
	I1212 00:15:10.845418   48339 command_runner.go:130] > /usr/local/bin/crictl
	I1212 00:15:10.845528   48339 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:15:10.867684   48339 command_runner.go:130] > Version:  0.1.0
	I1212 00:15:10.867710   48339 command_runner.go:130] > RuntimeName:  containerd
	I1212 00:15:10.867718   48339 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1212 00:15:10.867725   48339 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 00:15:10.869691   48339 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 00:15:10.869761   48339 ssh_runner.go:195] Run: containerd --version
	I1212 00:15:10.889630   48339 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1212 00:15:10.891644   48339 ssh_runner.go:195] Run: containerd --version
	I1212 00:15:10.909520   48339 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1212 00:15:10.917318   48339 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 00:15:10.920211   48339 cli_runner.go:164] Run: docker network inspect functional-767012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:15:10.936971   48339 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:15:10.940949   48339 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1212 00:15:10.941183   48339 kubeadm.go:884] updating cluster {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:15:10.941314   48339 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:15:10.941401   48339 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:15:10.964902   48339 command_runner.go:130] > {
	I1212 00:15:10.964923   48339 command_runner.go:130] >   "images":  [
	I1212 00:15:10.964934   48339 command_runner.go:130] >     {
	I1212 00:15:10.964944   48339 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:15:10.964949   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.964954   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:15:10.964957   48339 command_runner.go:130] >       ],
	I1212 00:15:10.964962   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.964974   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 00:15:10.964977   48339 command_runner.go:130] >       ],
	I1212 00:15:10.964982   48339 command_runner.go:130] >       "size":  "40636774",
	I1212 00:15:10.964989   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.964994   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965005   48339 command_runner.go:130] >     },
	I1212 00:15:10.965009   48339 command_runner.go:130] >     {
	I1212 00:15:10.965017   48339 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:15:10.965023   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965029   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:15:10.965032   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965036   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965047   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:15:10.965050   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965054   48339 command_runner.go:130] >       "size":  "8034419",
	I1212 00:15:10.965058   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965062   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965068   48339 command_runner.go:130] >     },
	I1212 00:15:10.965071   48339 command_runner.go:130] >     {
	I1212 00:15:10.965079   48339 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:15:10.965085   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965092   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:15:10.965095   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965101   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965112   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 00:15:10.965115   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965121   48339 command_runner.go:130] >       "size":  "21168808",
	I1212 00:15:10.965129   48339 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:15:10.965134   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965137   48339 command_runner.go:130] >     },
	I1212 00:15:10.965143   48339 command_runner.go:130] >     {
	I1212 00:15:10.965152   48339 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:15:10.965164   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965169   48339 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:15:10.965172   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965176   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965190   48339 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 00:15:10.965193   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965199   48339 command_runner.go:130] >       "size":  "21136588",
	I1212 00:15:10.965203   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965218   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965224   48339 command_runner.go:130] >       },
	I1212 00:15:10.965228   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965231   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965235   48339 command_runner.go:130] >     },
	I1212 00:15:10.965238   48339 command_runner.go:130] >     {
	I1212 00:15:10.965245   48339 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:15:10.965251   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965256   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:15:10.965262   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965266   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965274   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 00:15:10.965278   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965285   48339 command_runner.go:130] >       "size":  "24678359",
	I1212 00:15:10.965288   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965296   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965302   48339 command_runner.go:130] >       },
	I1212 00:15:10.965306   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965311   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965314   48339 command_runner.go:130] >     },
	I1212 00:15:10.965323   48339 command_runner.go:130] >     {
	I1212 00:15:10.965332   48339 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:15:10.965345   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965350   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:15:10.965354   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965358   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965373   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 00:15:10.965377   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965381   48339 command_runner.go:130] >       "size":  "20661043",
	I1212 00:15:10.965385   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965392   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965395   48339 command_runner.go:130] >       },
	I1212 00:15:10.965399   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965403   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965406   48339 command_runner.go:130] >     },
	I1212 00:15:10.965412   48339 command_runner.go:130] >     {
	I1212 00:15:10.965420   48339 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:15:10.965426   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965431   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:15:10.965434   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965438   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965446   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:15:10.965453   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965457   48339 command_runner.go:130] >       "size":  "22429671",
	I1212 00:15:10.965461   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965465   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965469   48339 command_runner.go:130] >     },
	I1212 00:15:10.965475   48339 command_runner.go:130] >     {
	I1212 00:15:10.965482   48339 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:15:10.965486   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965492   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:15:10.965497   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965502   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965515   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 00:15:10.965522   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965526   48339 command_runner.go:130] >       "size":  "15391364",
	I1212 00:15:10.965530   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965534   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965539   48339 command_runner.go:130] >       },
	I1212 00:15:10.965543   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965553   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965556   48339 command_runner.go:130] >     },
	I1212 00:15:10.965559   48339 command_runner.go:130] >     {
	I1212 00:15:10.965566   48339 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:15:10.965570   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965574   48339 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:15:10.965578   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965582   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965591   48339 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 00:15:10.965602   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965606   48339 command_runner.go:130] >       "size":  "267939",
	I1212 00:15:10.965610   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965614   48339 command_runner.go:130] >         "value":  "65535"
	I1212 00:15:10.965617   48339 command_runner.go:130] >       },
	I1212 00:15:10.965628   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965632   48339 command_runner.go:130] >       "pinned":  true
	I1212 00:15:10.965635   48339 command_runner.go:130] >     }
	I1212 00:15:10.965638   48339 command_runner.go:130] >   ]
	I1212 00:15:10.965640   48339 command_runner.go:130] > }
	I1212 00:15:10.968555   48339 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:15:10.968581   48339 containerd.go:534] Images already preloaded, skipping extraction
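
Editor's note: the two containerd.go lines above are the outcome of parsing the JSON dump; the preload check passes only when every expected image tag already appears in the runtime's list. A minimal Go sketch of that decision, assuming the "images" array shape shown in the log; the criImage and imagesPreloaded names are illustrative, not minikube's own:

package main

import (
	"encoding/json"
	"fmt"
)

type criImage struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
}

type criImageList struct {
	Images []criImage `json:"images"`
}

// imagesPreloaded reports whether every expected tag appears in the
// image list returned by `crictl images --output json`.
func imagesPreloaded(raw []byte, expected []string) (bool, error) {
	var list criImageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range expected {
		if !have[want] {
			return false, fmt.Errorf("missing %s", want)
		}
	}
	return true, nil
}

func main() {
	raw := []byte(`{"images":[{"id":"sha256:d7b1","repoTags":["registry.k8s.io/pause:3.10.1"]}]}`)
	fmt.Println(imagesPreloaded(raw, []string{"registry.k8s.io/pause:3.10.1"}))
}
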
	I1212 00:15:10.968640   48339 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:15:10.995305   48339 command_runner.go:130] > {
	I1212 00:15:10.995329   48339 command_runner.go:130] >   "images":  [
	I1212 00:15:10.995334   48339 command_runner.go:130] >     {
	I1212 00:15:10.995344   48339 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:15:10.995349   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995355   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:15:10.995359   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995375   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995392   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 00:15:10.995395   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995400   48339 command_runner.go:130] >       "size":  "40636774",
	I1212 00:15:10.995404   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995408   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995414   48339 command_runner.go:130] >     },
	I1212 00:15:10.995418   48339 command_runner.go:130] >     {
	I1212 00:15:10.995429   48339 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:15:10.995438   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995444   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:15:10.995448   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995452   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995466   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:15:10.995470   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995475   48339 command_runner.go:130] >       "size":  "8034419",
	I1212 00:15:10.995483   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995487   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995490   48339 command_runner.go:130] >     },
	I1212 00:15:10.995493   48339 command_runner.go:130] >     {
	I1212 00:15:10.995500   48339 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:15:10.995506   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995512   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:15:10.995515   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995524   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995536   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 00:15:10.995540   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995544   48339 command_runner.go:130] >       "size":  "21168808",
	I1212 00:15:10.995554   48339 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:15:10.995558   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995561   48339 command_runner.go:130] >     },
	I1212 00:15:10.995564   48339 command_runner.go:130] >     {
	I1212 00:15:10.995572   48339 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:15:10.995583   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995588   48339 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:15:10.995592   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995596   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995603   48339 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 00:15:10.995611   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995615   48339 command_runner.go:130] >       "size":  "21136588",
	I1212 00:15:10.995619   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995623   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995631   48339 command_runner.go:130] >       },
	I1212 00:15:10.995635   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995639   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995642   48339 command_runner.go:130] >     },
	I1212 00:15:10.995646   48339 command_runner.go:130] >     {
	I1212 00:15:10.995659   48339 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:15:10.995663   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995678   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:15:10.995687   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995692   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995701   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 00:15:10.995709   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995713   48339 command_runner.go:130] >       "size":  "24678359",
	I1212 00:15:10.995716   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995727   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995734   48339 command_runner.go:130] >       },
	I1212 00:15:10.995738   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995743   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995746   48339 command_runner.go:130] >     },
	I1212 00:15:10.995749   48339 command_runner.go:130] >     {
	I1212 00:15:10.995756   48339 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:15:10.995762   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995768   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:15:10.995771   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995782   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995795   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 00:15:10.995798   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995802   48339 command_runner.go:130] >       "size":  "20661043",
	I1212 00:15:10.995811   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995815   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995820   48339 command_runner.go:130] >       },
	I1212 00:15:10.995830   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995834   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995838   48339 command_runner.go:130] >     },
	I1212 00:15:10.995841   48339 command_runner.go:130] >     {
	I1212 00:15:10.995847   48339 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:15:10.995854   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995859   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:15:10.995863   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995867   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995877   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:15:10.995884   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995888   48339 command_runner.go:130] >       "size":  "22429671",
	I1212 00:15:10.995893   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995902   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995906   48339 command_runner.go:130] >     },
	I1212 00:15:10.995909   48339 command_runner.go:130] >     {
	I1212 00:15:10.995916   48339 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:15:10.995924   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995929   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:15:10.995933   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995937   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995948   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 00:15:10.995952   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995956   48339 command_runner.go:130] >       "size":  "15391364",
	I1212 00:15:10.995963   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995967   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995983   48339 command_runner.go:130] >       },
	I1212 00:15:10.995993   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995997   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.996001   48339 command_runner.go:130] >     },
	I1212 00:15:10.996004   48339 command_runner.go:130] >     {
	I1212 00:15:10.996011   48339 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:15:10.996020   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.996025   48339 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:15:10.996029   48339 command_runner.go:130] >       ],
	I1212 00:15:10.996033   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.996046   48339 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 00:15:10.996053   48339 command_runner.go:130] >       ],
	I1212 00:15:10.996057   48339 command_runner.go:130] >       "size":  "267939",
	I1212 00:15:10.996061   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.996065   48339 command_runner.go:130] >         "value":  "65535"
	I1212 00:15:10.996074   48339 command_runner.go:130] >       },
	I1212 00:15:10.996078   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.996086   48339 command_runner.go:130] >       "pinned":  true
	I1212 00:15:10.996089   48339 command_runner.go:130] >     }
	I1212 00:15:10.996095   48339 command_runner.go:130] >   ]
	I1212 00:15:10.996103   48339 command_runner.go:130] > }
	I1212 00:15:10.997943   48339 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:15:10.997972   48339 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:15:10.997981   48339 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 00:15:10.998119   48339 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-767012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
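
Editor's note: the kubelet unit logged above is rendered from a template and later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 328-byte scp further down). A minimal sketch of that rendering step with text/template, under the assumption that the flags are fixed and only the binary directory, node name, and node IP vary; the field names here are ours:

package main

import (
	"os"
	"text/template"
)

// unitTmpl mirrors the drop-in shown in the log above.
const unitTmpl = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	_ = t.Execute(os.Stdout, struct {
		BinDir, NodeName, NodeIP string
	}{"/var/lib/minikube/binaries/v1.35.0-beta.0", "functional-767012", "192.168.49.2"})
}
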
	I1212 00:15:10.998212   48339 ssh_runner.go:195] Run: sudo crictl info
	I1212 00:15:11.021367   48339 command_runner.go:130] > {
	I1212 00:15:11.021387   48339 command_runner.go:130] >   "cniconfig": {
	I1212 00:15:11.021393   48339 command_runner.go:130] >     "Networks": [
	I1212 00:15:11.021397   48339 command_runner.go:130] >       {
	I1212 00:15:11.021403   48339 command_runner.go:130] >         "Config": {
	I1212 00:15:11.021408   48339 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1212 00:15:11.021413   48339 command_runner.go:130] >           "Name": "cni-loopback",
	I1212 00:15:11.021418   48339 command_runner.go:130] >           "Plugins": [
	I1212 00:15:11.021422   48339 command_runner.go:130] >             {
	I1212 00:15:11.021426   48339 command_runner.go:130] >               "Network": {
	I1212 00:15:11.021430   48339 command_runner.go:130] >                 "ipam": {},
	I1212 00:15:11.021438   48339 command_runner.go:130] >                 "type": "loopback"
	I1212 00:15:11.021445   48339 command_runner.go:130] >               },
	I1212 00:15:11.021450   48339 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1212 00:15:11.021457   48339 command_runner.go:130] >             }
	I1212 00:15:11.021461   48339 command_runner.go:130] >           ],
	I1212 00:15:11.021470   48339 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1212 00:15:11.021474   48339 command_runner.go:130] >         },
	I1212 00:15:11.021485   48339 command_runner.go:130] >         "IFName": "lo"
	I1212 00:15:11.021489   48339 command_runner.go:130] >       }
	I1212 00:15:11.021493   48339 command_runner.go:130] >     ],
	I1212 00:15:11.021498   48339 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1212 00:15:11.021504   48339 command_runner.go:130] >     "PluginDirs": [
	I1212 00:15:11.021509   48339 command_runner.go:130] >       "/opt/cni/bin"
	I1212 00:15:11.021514   48339 command_runner.go:130] >     ],
	I1212 00:15:11.021525   48339 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1212 00:15:11.021533   48339 command_runner.go:130] >     "Prefix": "eth"
	I1212 00:15:11.021537   48339 command_runner.go:130] >   },
	I1212 00:15:11.021540   48339 command_runner.go:130] >   "config": {
	I1212 00:15:11.021546   48339 command_runner.go:130] >     "cdiSpecDirs": [
	I1212 00:15:11.021552   48339 command_runner.go:130] >       "/etc/cdi",
	I1212 00:15:11.021558   48339 command_runner.go:130] >       "/var/run/cdi"
	I1212 00:15:11.021560   48339 command_runner.go:130] >     ],
	I1212 00:15:11.021563   48339 command_runner.go:130] >     "cni": {
	I1212 00:15:11.021567   48339 command_runner.go:130] >       "binDir": "",
	I1212 00:15:11.021571   48339 command_runner.go:130] >       "binDirs": [
	I1212 00:15:11.021574   48339 command_runner.go:130] >         "/opt/cni/bin"
	I1212 00:15:11.021577   48339 command_runner.go:130] >       ],
	I1212 00:15:11.021582   48339 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1212 00:15:11.021585   48339 command_runner.go:130] >       "confTemplate": "",
	I1212 00:15:11.021589   48339 command_runner.go:130] >       "ipPref": "",
	I1212 00:15:11.021592   48339 command_runner.go:130] >       "maxConfNum": 1,
	I1212 00:15:11.021597   48339 command_runner.go:130] >       "setupSerially": false,
	I1212 00:15:11.021601   48339 command_runner.go:130] >       "useInternalLoopback": false
	I1212 00:15:11.021604   48339 command_runner.go:130] >     },
	I1212 00:15:11.021610   48339 command_runner.go:130] >     "containerd": {
	I1212 00:15:11.021614   48339 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1212 00:15:11.021619   48339 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1212 00:15:11.021624   48339 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1212 00:15:11.021627   48339 command_runner.go:130] >       "runtimes": {
	I1212 00:15:11.021630   48339 command_runner.go:130] >         "runc": {
	I1212 00:15:11.021635   48339 command_runner.go:130] >           "ContainerAnnotations": null,
	I1212 00:15:11.021639   48339 command_runner.go:130] >           "PodAnnotations": null,
	I1212 00:15:11.021644   48339 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1212 00:15:11.021648   48339 command_runner.go:130] >           "cgroupWritable": false,
	I1212 00:15:11.021652   48339 command_runner.go:130] >           "cniConfDir": "",
	I1212 00:15:11.021656   48339 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1212 00:15:11.021664   48339 command_runner.go:130] >           "io_type": "",
	I1212 00:15:11.021670   48339 command_runner.go:130] >           "options": {
	I1212 00:15:11.021675   48339 command_runner.go:130] >             "BinaryName": "",
	I1212 00:15:11.021683   48339 command_runner.go:130] >             "CriuImagePath": "",
	I1212 00:15:11.021695   48339 command_runner.go:130] >             "CriuWorkPath": "",
	I1212 00:15:11.021703   48339 command_runner.go:130] >             "IoGid": 0,
	I1212 00:15:11.021708   48339 command_runner.go:130] >             "IoUid": 0,
	I1212 00:15:11.021712   48339 command_runner.go:130] >             "NoNewKeyring": false,
	I1212 00:15:11.021716   48339 command_runner.go:130] >             "Root": "",
	I1212 00:15:11.021723   48339 command_runner.go:130] >             "ShimCgroup": "",
	I1212 00:15:11.021728   48339 command_runner.go:130] >             "SystemdCgroup": false
	I1212 00:15:11.021734   48339 command_runner.go:130] >           },
	I1212 00:15:11.021739   48339 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1212 00:15:11.021745   48339 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1212 00:15:11.021749   48339 command_runner.go:130] >           "runtimePath": "",
	I1212 00:15:11.021755   48339 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1212 00:15:11.021761   48339 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1212 00:15:11.021765   48339 command_runner.go:130] >           "snapshotter": ""
	I1212 00:15:11.021770   48339 command_runner.go:130] >         }
	I1212 00:15:11.021774   48339 command_runner.go:130] >       }
	I1212 00:15:11.021778   48339 command_runner.go:130] >     },
	I1212 00:15:11.021790   48339 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1212 00:15:11.021799   48339 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1212 00:15:11.021805   48339 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1212 00:15:11.021810   48339 command_runner.go:130] >     "disableApparmor": false,
	I1212 00:15:11.021816   48339 command_runner.go:130] >     "disableHugetlbController": true,
	I1212 00:15:11.021821   48339 command_runner.go:130] >     "disableProcMount": false,
	I1212 00:15:11.021825   48339 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1212 00:15:11.021828   48339 command_runner.go:130] >     "enableCDI": true,
	I1212 00:15:11.021832   48339 command_runner.go:130] >     "enableSelinux": false,
	I1212 00:15:11.021840   48339 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1212 00:15:11.021845   48339 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1212 00:15:11.021852   48339 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1212 00:15:11.021858   48339 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1212 00:15:11.021868   48339 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1212 00:15:11.021873   48339 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1212 00:15:11.021877   48339 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1212 00:15:11.021886   48339 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1212 00:15:11.021890   48339 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1212 00:15:11.021896   48339 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1212 00:15:11.021901   48339 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1212 00:15:11.021907   48339 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1212 00:15:11.021910   48339 command_runner.go:130] >   },
	I1212 00:15:11.021914   48339 command_runner.go:130] >   "features": {
	I1212 00:15:11.021919   48339 command_runner.go:130] >     "supplemental_groups_policy": true
	I1212 00:15:11.021922   48339 command_runner.go:130] >   },
	I1212 00:15:11.021926   48339 command_runner.go:130] >   "golang": "go1.24.9",
	I1212 00:15:11.021938   48339 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 00:15:11.021951   48339 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 00:15:11.021954   48339 command_runner.go:130] >   "runtimeHandlers": [
	I1212 00:15:11.021957   48339 command_runner.go:130] >     {
	I1212 00:15:11.021961   48339 command_runner.go:130] >       "features": {
	I1212 00:15:11.021973   48339 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 00:15:11.021977   48339 command_runner.go:130] >         "user_namespaces": true
	I1212 00:15:11.021984   48339 command_runner.go:130] >       }
	I1212 00:15:11.021991   48339 command_runner.go:130] >     },
	I1212 00:15:11.021996   48339 command_runner.go:130] >     {
	I1212 00:15:11.022000   48339 command_runner.go:130] >       "features": {
	I1212 00:15:11.022006   48339 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 00:15:11.022013   48339 command_runner.go:130] >         "user_namespaces": true
	I1212 00:15:11.022016   48339 command_runner.go:130] >       },
	I1212 00:15:11.022021   48339 command_runner.go:130] >       "name": "runc"
	I1212 00:15:11.022026   48339 command_runner.go:130] >     }
	I1212 00:15:11.022029   48339 command_runner.go:130] >   ],
	I1212 00:15:11.022033   48339 command_runner.go:130] >   "status": {
	I1212 00:15:11.022045   48339 command_runner.go:130] >     "conditions": [
	I1212 00:15:11.022048   48339 command_runner.go:130] >       {
	I1212 00:15:11.022055   48339 command_runner.go:130] >         "message": "",
	I1212 00:15:11.022059   48339 command_runner.go:130] >         "reason": "",
	I1212 00:15:11.022065   48339 command_runner.go:130] >         "status": true,
	I1212 00:15:11.022070   48339 command_runner.go:130] >         "type": "RuntimeReady"
	I1212 00:15:11.022073   48339 command_runner.go:130] >       },
	I1212 00:15:11.022076   48339 command_runner.go:130] >       {
	I1212 00:15:11.022083   48339 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1212 00:15:11.022087   48339 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1212 00:15:11.022094   48339 command_runner.go:130] >         "status": false,
	I1212 00:15:11.022099   48339 command_runner.go:130] >         "type": "NetworkReady"
	I1212 00:15:11.022104   48339 command_runner.go:130] >       },
	I1212 00:15:11.022107   48339 command_runner.go:130] >       {
	I1212 00:15:11.022132   48339 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1212 00:15:11.022141   48339 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1212 00:15:11.022149   48339 command_runner.go:130] >         "status": false,
	I1212 00:15:11.022155   48339 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1212 00:15:11.022158   48339 command_runner.go:130] >       }
	I1212 00:15:11.022161   48339 command_runner.go:130] >     ]
	I1212 00:15:11.022164   48339 command_runner.go:130] >   }
	I1212 00:15:11.022166   48339 command_runner.go:130] > }
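
Editor's note: the crictl info dump matters here for its status.conditions array: RuntimeReady is true but NetworkReady is false until a CNI config lands in /etc/cni/net.d, which is exactly why the next step picks a CNI manager. A small Go sketch of reading those conditions, assuming the JSON shape above; the type names are illustrative:

package main

import (
	"encoding/json"
	"fmt"
)

type criCondition struct {
	Type    string `json:"type"`
	Status  bool   `json:"status"`
	Reason  string `json:"reason"`
	Message string `json:"message"`
}

type criInfo struct {
	Status struct {
		Conditions []criCondition `json:"conditions"`
	} `json:"status"`
}

func main() {
	raw := []byte(`{"status":{"conditions":[
		{"type":"RuntimeReady","status":true},
		{"type":"NetworkReady","status":false,"reason":"NetworkPluginNotReady"}]}}`)
	var info criInfo
	if err := json.Unmarshal(raw, &info); err != nil {
		panic(err)
	}
	// Report every condition that is not met, as the kubelet would.
	for _, c := range info.Status.Conditions {
		if !c.Status {
			fmt.Printf("condition %s not met: %s\n", c.Type, c.Reason)
		}
	}
}
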
	I1212 00:15:11.024522   48339 cni.go:84] Creating CNI manager for ""
	I1212 00:15:11.024547   48339 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:15:11.024564   48339 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:15:11.024607   48339 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-767012 NodeName:functional-767012 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:15:11.024773   48339 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-767012"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
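Editor's note: the rendered config is staged as kubeadm.yaml.new (the 2237-byte copy below) and only later diffed against the live file (the sudo diff -u further down) so an unchanged cluster can skip reconfiguration. A hedged sketch of that stage-and-compare idea; stageConfig is a hypothetical helper, not minikube's API:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// stageConfig writes the freshly rendered config next to the live one
// and reports whether the live file would change.
func stageConfig(live, staged string, rendered []byte) (changed bool, err error) {
	if err := os.WriteFile(staged, rendered, 0o644); err != nil {
		return false, err
	}
	current, err := os.ReadFile(live)
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	return !bytes.Equal(current, rendered), nil
}

func main() {
	changed, err := stageConfig("/tmp/kubeadm.yaml", "/tmp/kubeadm.yaml.new",
		[]byte("kind: ClusterConfiguration\n"))
	fmt.Println(changed, err)
}
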
	I1212 00:15:11.024850   48339 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:15:11.031979   48339 command_runner.go:130] > kubeadm
	I1212 00:15:11.031999   48339 command_runner.go:130] > kubectl
	I1212 00:15:11.032004   48339 command_runner.go:130] > kubelet
	I1212 00:15:11.033031   48339 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:15:11.033131   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:15:11.041032   48339 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 00:15:11.054723   48339 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:15:11.067854   48339 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1212 00:15:11.081373   48339 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:15:11.085014   48339 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 00:15:11.085116   48339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:15:11.226173   48339 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:15:12.035778   48339 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012 for IP: 192.168.49.2
	I1212 00:15:12.035798   48339 certs.go:195] generating shared ca certs ...
	I1212 00:15:12.035830   48339 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:15:12.035967   48339 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 00:15:12.036010   48339 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 00:15:12.036017   48339 certs.go:257] generating profile certs ...
	I1212 00:15:12.036117   48339 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key
	I1212 00:15:12.036165   48339 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key.fcbff5a4
	I1212 00:15:12.036201   48339 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key
	I1212 00:15:12.036209   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:15:12.036224   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:15:12.036235   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:15:12.036248   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:15:12.036258   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:15:12.036270   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:15:12.036281   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:15:12.036294   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:15:12.036341   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 00:15:12.036372   48339 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 00:15:12.036381   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:15:12.036409   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:15:12.036440   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:15:12.036468   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 00:15:12.036516   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:15:12.036546   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem -> /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.036558   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.036578   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.037134   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:15:12.059224   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:15:12.079145   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:15:12.096868   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:15:12.114531   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:15:12.132828   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:15:12.150161   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:15:12.168014   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:15:12.185251   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 00:15:12.202557   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 00:15:12.219625   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:15:12.237574   48339 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:15:12.250472   48339 ssh_runner.go:195] Run: openssl version
	I1212 00:15:12.256541   48339 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 00:15:12.256947   48339 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.264387   48339 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 00:15:12.271688   48339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.275404   48339 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.275432   48339 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.275482   48339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.315860   48339 command_runner.go:130] > 51391683
	I1212 00:15:12.316400   48339 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:15:12.323656   48339 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.330945   48339 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 00:15:12.339131   48339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.343064   48339 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.343159   48339 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.343241   48339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.383845   48339 command_runner.go:130] > 3ec20f2e
	I1212 00:15:12.384302   48339 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:15:12.391740   48339 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.398710   48339 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:15:12.406076   48339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.409726   48339 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.409770   48339 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.409826   48339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.450507   48339 command_runner.go:130] > b5213941
	I1212 00:15:12.450926   48339 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
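
Editor's note: each of the three certificates above goes through the same dance: openssl x509 -hash -noout prints the subject hash (51391683, 3ec20f2e, b5213941) that names the /etc/ssl/certs/<hash>.0 symlink OpenSSL uses for CA lookup, and ln -fs creates it. A minimal Go sketch of one round of that, assuming openssl is on PATH; linkCert is our name, not minikube's:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash for certPath and creates
// the <hash>.0 symlink in certsDir, like the ln -fs in the log.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// ln -fs equivalent: drop any stale link before re-creating it.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}
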
	I1212 00:15:12.458188   48339 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:15:12.461873   48339 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:15:12.461949   48339 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 00:15:12.461961   48339 command_runner.go:130] > Device: 259,1	Inode: 1311423     Links: 1
	I1212 00:15:12.461969   48339 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:15:12.461975   48339 command_runner.go:130] > Access: 2025-12-12 00:11:05.099200071 +0000
	I1212 00:15:12.461979   48339 command_runner.go:130] > Modify: 2025-12-12 00:07:00.969098600 +0000
	I1212 00:15:12.461984   48339 command_runner.go:130] > Change: 2025-12-12 00:07:00.969098600 +0000
	I1212 00:15:12.461989   48339 command_runner.go:130] >  Birth: 2025-12-12 00:07:00.969098600 +0000
	I1212 00:15:12.462077   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:15:12.504549   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.505002   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:15:12.545847   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.545927   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:15:12.586405   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.586767   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:15:12.629151   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.629637   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:15:12.671966   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.672529   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 00:15:12.713858   48339 command_runner.go:130] > Certificate will not expire
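
Editor's note: the six "Certificate will not expire" lines are openssl x509 -checkend 86400 runs, which fail if the certificate expires within the next 24 hours. The same test can be done in pure Go with crypto/x509; a sketch, with checkEnd as our name for it:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// checkEnd mimics `openssl x509 -checkend`: it errors if the cert at
// path expires within the given window.
func checkEnd(path string, window time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(window).After(cert.NotAfter) {
		return fmt.Errorf("certificate expires at %s", cert.NotAfter)
	}
	return nil // "Certificate will not expire"
}

func main() {
	fmt.Println(checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
}
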
	I1212 00:15:12.714272   48339 kubeadm.go:401] StartCluster: {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:15:12.714367   48339 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 00:15:12.714442   48339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:15:12.749902   48339 cri.go:89] found id: ""
	I1212 00:15:12.750000   48339 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:15:12.759407   48339 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 00:15:12.759429   48339 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 00:15:12.759437   48339 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 00:15:12.760379   48339 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:15:12.760398   48339 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:15:12.760457   48339 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:15:12.768161   48339 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:15:12.768602   48339 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-767012" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.768706   48339 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-2343/kubeconfig needs updating (will repair): [kubeconfig missing "functional-767012" cluster setting kubeconfig missing "functional-767012" context setting]
	I1212 00:15:12.769002   48339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:15:12.769434   48339 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.769575   48339 kapi.go:59] client config for functional-767012: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key", CAFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
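
Editor's note: the kubeconfig.go lines above detect that the functional-767012 cluster and context entries are missing and repair the kubeconfig before the client config is built. A sketch of that repair using client-go's clientcmd package, assuming its LoadFromFile/WriteToFile helpers behave as documented; the repair function is ours, not minikube's:

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// repair adds missing cluster and context entries for name, mirroring
// the "kubeconfig needs updating (will repair)" step in the log.
func repair(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		c := api.NewCluster()
		c.Server = server
		cfg.Clusters[name] = c
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	_ = repair("/home/jenkins/minikube-integration/22101-2343/kubeconfig",
		"functional-767012", "https://192.168.49.2:8441")
}
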
	I1212 00:15:12.770098   48339 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 00:15:12.770119   48339 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 00:15:12.770125   48339 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 00:15:12.770129   48339 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 00:15:12.770134   48339 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 00:15:12.770402   48339 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:15:12.770508   48339 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 00:15:12.778529   48339 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 00:15:12.778562   48339 kubeadm.go:602] duration metric: took 18.158491ms to restartPrimaryControlPlane
	I1212 00:15:12.778572   48339 kubeadm.go:403] duration metric: took 64.30535ms to StartCluster
	I1212 00:15:12.778619   48339 settings.go:142] acquiring lock: {Name:mk6dd4250df69aeba4752e9f33aeef37272375c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:15:12.778710   48339 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.779343   48339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:15:12.779578   48339 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 00:15:12.779758   48339 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:15:12.779798   48339 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:15:12.779860   48339 addons.go:70] Setting storage-provisioner=true in profile "functional-767012"
	I1212 00:15:12.779873   48339 addons.go:239] Setting addon storage-provisioner=true in "functional-767012"
	I1212 00:15:12.779899   48339 host.go:66] Checking if "functional-767012" exists ...
	I1212 00:15:12.780379   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:12.780789   48339 addons.go:70] Setting default-storageclass=true in profile "functional-767012"
	I1212 00:15:12.780811   48339 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-767012"
	I1212 00:15:12.781090   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:12.784774   48339 out.go:179] * Verifying Kubernetes components...
	I1212 00:15:12.788318   48339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:15:12.822440   48339 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.822619   48339 kapi.go:59] client config for functional-767012: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key", CAFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:15:12.822882   48339 addons.go:239] Setting addon default-storageclass=true in "functional-767012"
	I1212 00:15:12.822910   48339 host.go:66] Checking if "functional-767012" exists ...
	I1212 00:15:12.823362   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:12.828706   48339 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:15:12.831719   48339 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:12.831746   48339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:15:12.831810   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:12.856565   48339 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:12.856586   48339 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:15:12.856663   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:12.891591   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:12.907113   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:13.031282   48339 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:15:13.038860   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:13.055219   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:13.785959   48339 node_ready.go:35] waiting up to 6m0s for node "functional-767012" to be "Ready" ...
	I1212 00:15:13.786096   48339 type.go:168] "Request Body" body=""
	I1212 00:15:13.786201   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
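
The round_trippers lines are client-go's verbose request log: a GET on /api/v1/nodes/functional-767012 that advertises protobuf first with JSON as a fallback. A minimal net/http reproduction of that request using the cert, key, and CA paths dumped in the rest.Config above (illustrative only; while the apiserver is down it fails with "connection refused", which is why the log's Response lines are empty):

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"net/http"
    	"os"
    )

    func main() {
    	cert, err := tls.LoadX509KeyPair(
    		"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt",
    		"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key",
    	)
    	if err != nil {
    		panic(err)
    	}
    	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)
    	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
    		Certificates: []tls.Certificate{cert},
    		RootCAs:      pool,
    	}}}
    	req, _ := http.NewRequest("GET", "https://192.168.49.2:8441/api/v1/nodes/functional-767012", nil)
    	req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
    	resp, err := client.Do(req)
    	if err != nil {
    		fmt.Println(err) // "connection refused" while the apiserver is down
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println(resp.Status)
    }
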
	I1212 00:15:13.786332   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:13.786513   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:13.786544   48339 retry.go:31] will retry after 252.334378ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
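
Each retry.go:31 line above is one step of a jittered-backoff apply loop: kubectl apply fails because validation needs the OpenAPI schema from the (still down) apiserver, and the runner sleeps a randomized, growing interval before trying again. A minimal sketch of that pattern (base duration and jitter are illustrative, not minikube's actual retry constants or code):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff retries fn with exponentially growing, jittered sleeps,
    // mirroring the "will retry after ..." lines in the log above.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		sleep := base << i                                // exponential growth
    		sleep += time.Duration(rand.Int63n(int64(sleep))) // jitter in [0, sleep)
    		fmt.Printf("will retry after %s: %v\n", sleep, err)
    		time.Sleep(sleep)
    	}
    	return err
    }

    func main() {
    	_ = retryWithBackoff(5, 200*time.Millisecond, func() error {
    		return errors.New("connect: connection refused")
    	})
    }
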
	I1212 00:15:13.786634   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:13.786678   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:13.786692   48339 retry.go:31] will retry after 187.958053ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:13.786725   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:13.975259   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:14.039772   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:14.044477   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.044582   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.044648   48339 retry.go:31] will retry after 322.190642ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.103040   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.103100   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.103119   48339 retry.go:31] will retry after 449.616448ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.286283   48339 type.go:168] "Request Body" body=""
	I1212 00:15:14.286357   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:14.286666   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:14.367911   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:14.423058   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.426726   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.426805   48339 retry.go:31] will retry after 304.882295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.552989   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:14.624219   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.624296   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.624324   48339 retry.go:31] will retry after 431.233251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.732500   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:14.787073   48339 type.go:168] "Request Body" body=""
	I1212 00:15:14.787160   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:14.787408   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:14.793570   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.793617   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.793638   48339 retry.go:31] will retry after 814.242182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.055819   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:15.115988   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:15.119844   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.119920   48339 retry.go:31] will retry after 1.173578041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.287015   48339 type.go:168] "Request Body" body=""
	I1212 00:15:15.287127   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:15.287435   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:15.608995   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:15.668352   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:15.672074   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.672106   48339 retry.go:31] will retry after 987.735436ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.786224   48339 type.go:168] "Request Body" body=""
	I1212 00:15:15.786336   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:15.786676   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:15.786781   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
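
The node_ready.go loop above GETs the node object roughly every 500ms and inspects its Ready condition, retrying on dial failures for up to its 6m0s budget. A minimal client-go sketch of the same check (a library-style sketch under those assumptions, not the test's exact code; the clientset would be built as in the earlier rest.Config example):

    package nodewait

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // WaitNodeReady polls until the named node reports Ready=True or the
    // timeout elapses, retrying through transient apiserver outages.
    func WaitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		// Connection-refused errors land here and we retry, as in the log.
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }
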
	I1212 00:15:16.286218   48339 type.go:168] "Request Body" body=""
	I1212 00:15:16.286309   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:16.286618   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:16.293963   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:16.350242   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:16.354044   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.354074   48339 retry.go:31] will retry after 1.703488512s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.660633   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:16.720806   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:16.720847   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.720866   48339 retry.go:31] will retry after 1.717481089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.787045   48339 type.go:168] "Request Body" body=""
	I1212 00:15:16.787165   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:16.787500   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:17.287197   48339 type.go:168] "Request Body" body=""
	I1212 00:15:17.287287   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:17.287663   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:17.786193   48339 type.go:168] "Request Body" body=""
	I1212 00:15:17.786301   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:17.786622   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:18.058032   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:18.119712   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:18.119758   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.119777   48339 retry.go:31] will retry after 2.564790813s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.286189   48339 type.go:168] "Request Body" body=""
	I1212 00:15:18.286256   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:18.286531   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:18.286571   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:18.438948   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:18.492343   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:18.495818   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.495853   48339 retry.go:31] will retry after 3.474173077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.786235   48339 type.go:168] "Request Body" body=""
	I1212 00:15:18.786319   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:18.786633   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:19.286373   48339 type.go:168] "Request Body" body=""
	I1212 00:15:19.286489   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:19.286915   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:19.786192   48339 type.go:168] "Request Body" body=""
	I1212 00:15:19.786262   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:19.786531   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:20.286266   48339 type.go:168] "Request Body" body=""
	I1212 00:15:20.286338   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:20.286671   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:20.286730   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:20.685395   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:20.744336   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:20.744377   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:20.744397   48339 retry.go:31] will retry after 3.068053389s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:20.786556   48339 type.go:168] "Request Body" body=""
	I1212 00:15:20.786632   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:20.787017   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:21.286794   48339 type.go:168] "Request Body" body=""
	I1212 00:15:21.286863   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:21.287178   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:21.786938   48339 type.go:168] "Request Body" body=""
	I1212 00:15:21.787095   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:21.787425   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:21.970778   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:22.029300   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:22.033382   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:22.033416   48339 retry.go:31] will retry after 3.143683139s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:22.286887   48339 type.go:168] "Request Body" body=""
	I1212 00:15:22.286963   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:22.287298   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:22.287349   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:22.786122   48339 type.go:168] "Request Body" body=""
	I1212 00:15:22.786203   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:22.786515   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:23.286522   48339 type.go:168] "Request Body" body=""
	I1212 00:15:23.286595   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:23.286902   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:23.786669   48339 type.go:168] "Request Body" body=""
	I1212 00:15:23.786750   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:23.787071   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:23.813245   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:23.872447   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:23.872484   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:23.872503   48339 retry.go:31] will retry after 4.295118946s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:24.286878   48339 type.go:168] "Request Body" body=""
	I1212 00:15:24.286966   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:24.287236   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:24.787020   48339 type.go:168] "Request Body" body=""
	I1212 00:15:24.787113   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:24.787396   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:24.787455   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:25.178129   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:25.240141   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:25.243777   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:25.243806   48339 retry.go:31] will retry after 9.168145583s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:25.286119   48339 type.go:168] "Request Body" body=""
	I1212 00:15:25.286212   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:25.286559   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:25.787134   48339 type.go:168] "Request Body" body=""
	I1212 00:15:25.787314   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:25.787683   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:26.286268   48339 type.go:168] "Request Body" body=""
	I1212 00:15:26.286357   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:26.286692   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:26.786194   48339 type.go:168] "Request Body" body=""
	I1212 00:15:26.786286   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:26.786601   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:27.286932   48339 type.go:168] "Request Body" body=""
	I1212 00:15:27.287015   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:27.287267   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:27.287315   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:27.787077   48339 type.go:168] "Request Body" body=""
	I1212 00:15:27.787176   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:27.787513   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:28.168008   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:28.231881   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:28.231917   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:28.231944   48339 retry.go:31] will retry after 6.344313185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:28.286314   48339 type.go:168] "Request Body" body=""
	I1212 00:15:28.286400   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:28.286700   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:28.786192   48339 type.go:168] "Request Body" body=""
	I1212 00:15:28.786267   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:28.786531   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:29.286238   48339 type.go:168] "Request Body" body=""
	I1212 00:15:29.286308   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:29.286623   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:29.786295   48339 type.go:168] "Request Body" body=""
	I1212 00:15:29.786368   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:29.786689   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:29.786753   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:30.287110   48339 type.go:168] "Request Body" body=""
	I1212 00:15:30.287175   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:30.287426   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:30.786872   48339 type.go:168] "Request Body" body=""
	I1212 00:15:30.786960   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:30.787297   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:31.286942   48339 type.go:168] "Request Body" body=""
	I1212 00:15:31.287032   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:31.287368   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:31.786980   48339 type.go:168] "Request Body" body=""
	I1212 00:15:31.787074   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:31.787418   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:31.787478   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:32.286186   48339 type.go:168] "Request Body" body=""
	I1212 00:15:32.286271   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:32.286599   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:32.786430   48339 type.go:168] "Request Body" body=""
	I1212 00:15:32.786534   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:32.786856   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:33.286674   48339 type.go:168] "Request Body" body=""
	I1212 00:15:33.286767   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:33.287049   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:33.786796   48339 type.go:168] "Request Body" body=""
	I1212 00:15:33.786868   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:33.787225   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:34.286903   48339 type.go:168] "Request Body" body=""
	I1212 00:15:34.287005   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:34.287348   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:34.287421   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:34.412873   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:34.471886   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:34.475429   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:34.475459   48339 retry.go:31] will retry after 5.427832253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:34.576727   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:34.645023   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:34.645064   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:34.645084   48339 retry.go:31] will retry after 14.315988892s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:34.786162   48339 type.go:168] "Request Body" body=""
	I1212 00:15:34.786245   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:34.786506   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:35.286256   48339 type.go:168] "Request Body" body=""
	I1212 00:15:35.286369   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:35.286766   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:35.786480   48339 type.go:168] "Request Body" body=""
	I1212 00:15:35.786551   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:35.786861   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:36.286546   48339 type.go:168] "Request Body" body=""
	I1212 00:15:36.286613   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:36.286890   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:36.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:15:36.786309   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:36.786640   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:36.786704   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
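
This is the readiness loop behind node_ready.go:55: roughly every 500ms minikube GETs the node object and checks its Ready condition, logging and retrying on error. A minimal client-go sketch of the same check, with the kubeconfig path and node name taken from the log as assumptions (minikube's own loop is more involved):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-767012", metav1.GetOptions{})
		if err != nil {
			// With the apiserver down this is "connect: connection refused", as above.
			fmt.Println("will retry:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```
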
	I1212 00:15:37.286243   48339 type.go:168] "Request Body" body=""
	I1212 00:15:37.286323   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:37.286640   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:37.786331   48339 type.go:168] "Request Body" body=""
	I1212 00:15:37.786426   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:37.786691   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:38.286739   48339 type.go:168] "Request Body" body=""
	I1212 00:15:38.286834   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:38.287212   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:38.787067   48339 type.go:168] "Request Body" body=""
	I1212 00:15:38.787165   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:38.787505   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:38.787556   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:39.286897   48339 type.go:168] "Request Body" body=""
	I1212 00:15:39.286974   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:39.287246   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:39.787072   48339 type.go:168] "Request Body" body=""
	I1212 00:15:39.787155   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:39.787481   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:39.903977   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:39.961517   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:39.961553   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:39.961584   48339 retry.go:31] will retry after 9.825060256s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:40.286904   48339 type.go:168] "Request Body" body=""
	I1212 00:15:40.287016   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:40.287324   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:40.786920   48339 type.go:168] "Request Body" body=""
	I1212 00:15:40.787007   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:40.787265   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:41.287079   48339 type.go:168] "Request Body" body=""
	I1212 00:15:41.287171   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:41.287483   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:41.287535   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:41.786177   48339 type.go:168] "Request Body" body=""
	I1212 00:15:41.786245   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:41.786586   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:42.286210   48339 type.go:168] "Request Body" body=""
	I1212 00:15:42.286304   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:42.286665   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:42.786373   48339 type.go:168] "Request Body" body=""
	I1212 00:15:42.786449   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:42.786735   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:43.286695   48339 type.go:168] "Request Body" body=""
	I1212 00:15:43.286781   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:43.287063   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:43.786792   48339 type.go:168] "Request Body" body=""
	I1212 00:15:43.786867   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:43.787142   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:43.787197   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:44.286976   48339 type.go:168] "Request Body" body=""
	I1212 00:15:44.287083   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:44.287398   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:44.786120   48339 type.go:168] "Request Body" body=""
	I1212 00:15:44.786194   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:44.786513   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:45.286282   48339 type.go:168] "Request Body" body=""
	I1212 00:15:45.286447   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:45.286824   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:45.786533   48339 type.go:168] "Request Body" body=""
	I1212 00:15:45.786632   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:45.786951   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:46.286792   48339 type.go:168] "Request Body" body=""
	I1212 00:15:46.286884   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:46.287186   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:46.287237   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:46.786874   48339 type.go:168] "Request Body" body=""
	I1212 00:15:46.786956   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:46.787268   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:47.287109   48339 type.go:168] "Request Body" body=""
	I1212 00:15:47.287201   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:47.287499   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:47.786233   48339 type.go:168] "Request Body" body=""
	I1212 00:15:47.786303   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:47.786629   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:48.286436   48339 type.go:168] "Request Body" body=""
	I1212 00:15:48.286503   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:48.286772   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:48.786216   48339 type.go:168] "Request Body" body=""
	I1212 00:15:48.786290   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:48.786671   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:48.786725   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:48.962079   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:49.024775   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:49.024824   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:49.024842   48339 retry.go:31] will retry after 15.053349185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
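
Tracking the retry.go lines so far, each manifest's delays grow with some randomness: storage-provisioner 5.43s then 9.83s, storageclass 14.32s then 15.05s, with 17-25s waits later in the log. That is the usual exponential-backoff-with-jitter shape; in the Kubernetes ecosystem the stock building block is apimachinery's wait.Backoff, sketched below with parameters loosely matched to these delays (minikube's retry.go is its own implementation and may differ):

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	backoff := wait.Backoff{
		Duration: 5 * time.Second, // first delay, close to the 5.43s seen above
		Factor:   1.5,             // growth per attempt
		Jitter:   0.5,             // randomize so parallel appliers don't sync up
		Steps:    8,               // give up after this many attempts
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		fmt.Println("trying apply...")
		return false, nil // not done yet; sleep the next (jittered) delay and retry
	})
	if errors.Is(err, wait.ErrWaitTimeout) {
		fmt.Println("gave up:", err)
	}
}
```
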
	I1212 00:15:49.286133   48339 type.go:168] "Request Body" body=""
	I1212 00:15:49.286218   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:49.286771   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:49.786188   48339 type.go:168] "Request Body" body=""
	I1212 00:15:49.786266   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:49.786639   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:49.786790   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:49.873069   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:49.873108   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:49.873126   48339 retry.go:31] will retry after 17.371130847s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:50.286878   48339 type.go:168] "Request Body" body=""
	I1212 00:15:50.286961   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:50.287310   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:50.787122   48339 type.go:168] "Request Body" body=""
	I1212 00:15:50.787202   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:50.787523   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:50.787579   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:51.286912   48339 type.go:168] "Request Body" body=""
	I1212 00:15:51.286981   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:51.287298   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:51.787059   48339 type.go:168] "Request Body" body=""
	I1212 00:15:51.787135   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:51.787456   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:52.286151   48339 type.go:168] "Request Body" body=""
	I1212 00:15:52.286226   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:52.286553   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:52.786336   48339 type.go:168] "Request Body" body=""
	I1212 00:15:52.786407   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:52.786699   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:53.286548   48339 type.go:168] "Request Body" body=""
	I1212 00:15:53.286619   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:53.286939   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:53.287009   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:53.786505   48339 type.go:168] "Request Body" body=""
	I1212 00:15:53.786577   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:53.786912   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:54.286719   48339 type.go:168] "Request Body" body=""
	I1212 00:15:54.286786   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:54.287059   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:54.786836   48339 type.go:168] "Request Body" body=""
	I1212 00:15:54.786933   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:54.787274   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:55.287094   48339 type.go:168] "Request Body" body=""
	I1212 00:15:55.287171   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:55.287511   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:55.287570   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:55.786152   48339 type.go:168] "Request Body" body=""
	I1212 00:15:55.786220   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:55.786474   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:56.286213   48339 type.go:168] "Request Body" body=""
	I1212 00:15:56.286312   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:56.286631   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:56.786168   48339 type.go:168] "Request Body" body=""
	I1212 00:15:56.786239   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:56.786561   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:57.287075   48339 type.go:168] "Request Body" body=""
	I1212 00:15:57.287147   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:57.287400   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:57.787153   48339 type.go:168] "Request Body" body=""
	I1212 00:15:57.787225   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:57.787534   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:57.787585   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:58.286375   48339 type.go:168] "Request Body" body=""
	I1212 00:15:58.286450   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:58.286783   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:58.786215   48339 type.go:168] "Request Body" body=""
	I1212 00:15:58.786282   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:58.786594   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:59.286241   48339 type.go:168] "Request Body" body=""
	I1212 00:15:59.286312   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:59.286622   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:59.786317   48339 type.go:168] "Request Body" body=""
	I1212 00:15:59.786388   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:59.786719   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:00.292274   48339 type.go:168] "Request Body" body=""
	I1212 00:16:00.292358   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:00.292654   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:00.292703   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:00.786205   48339 type.go:168] "Request Body" body=""
	I1212 00:16:00.786286   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:00.786644   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:01.286348   48339 type.go:168] "Request Body" body=""
	I1212 00:16:01.286432   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:01.286773   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:01.787146   48339 type.go:168] "Request Body" body=""
	I1212 00:16:01.787221   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:01.787510   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:02.286209   48339 type.go:168] "Request Body" body=""
	I1212 00:16:02.286300   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:02.286617   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:02.786467   48339 type.go:168] "Request Body" body=""
	I1212 00:16:02.786540   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:02.786883   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:02.786938   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:03.286672   48339 type.go:168] "Request Body" body=""
	I1212 00:16:03.286737   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:03.287012   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:03.786797   48339 type.go:168] "Request Body" body=""
	I1212 00:16:03.786868   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:03.787218   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:04.078782   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:16:04.137731   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:04.141181   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:04.141215   48339 retry.go:31] will retry after 17.411337884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:04.286486   48339 type.go:168] "Request Body" body=""
	I1212 00:16:04.286564   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:04.286889   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:04.786185   48339 type.go:168] "Request Body" body=""
	I1212 00:16:04.786276   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:04.786662   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:05.286261   48339 type.go:168] "Request Body" body=""
	I1212 00:16:05.286336   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:05.286651   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:05.286703   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:05.786375   48339 type.go:168] "Request Body" body=""
	I1212 00:16:05.786467   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:05.786794   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:06.286188   48339 type.go:168] "Request Body" body=""
	I1212 00:16:06.286265   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:06.286589   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:06.786260   48339 type.go:168] "Request Body" body=""
	I1212 00:16:06.786341   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:06.786641   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:07.245320   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:16:07.286783   48339 type.go:168] "Request Body" body=""
	I1212 00:16:07.286895   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:07.287194   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:07.287250   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:07.304749   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:07.304789   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:07.304807   48339 retry.go:31] will retry after 24.953429831s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
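
Note that the two dial targets fail the same way: kubectl inside the node gets "connection refused" on localhost:8441, and the test harness gets it on 192.168.49.2:8441. A refused connection (as opposed to a timeout) means the host is reachable but nothing is listening on the port, which points at the kube-apiserver process being down rather than at routing or firewalling. A quick probe that makes that distinction visible, with the address taken from the log:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" comes back immediately; a filtered port would time out instead.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8441")
}
```
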
	I1212 00:16:07.787063   48339 type.go:168] "Request Body" body=""
	I1212 00:16:07.787138   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:07.787437   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:08.286404   48339 type.go:168] "Request Body" body=""
	I1212 00:16:08.286476   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:08.286783   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:08.786218   48339 type.go:168] "Request Body" body=""
	I1212 00:16:08.786293   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:08.786671   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:09.286981   48339 type.go:168] "Request Body" body=""
	I1212 00:16:09.287066   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:09.287329   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:09.287373   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:09.787100   48339 type.go:168] "Request Body" body=""
	I1212 00:16:09.787195   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:09.787534   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:10.286235   48339 type.go:168] "Request Body" body=""
	I1212 00:16:10.286321   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:10.286701   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:10.786206   48339 type.go:168] "Request Body" body=""
	I1212 00:16:10.786294   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:10.786608   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:11.286206   48339 type.go:168] "Request Body" body=""
	I1212 00:16:11.286280   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:11.286613   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:11.786187   48339 type.go:168] "Request Body" body=""
	I1212 00:16:11.786279   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:11.786620   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:11.786679   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:12.286942   48339 type.go:168] "Request Body" body=""
	I1212 00:16:12.287031   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:12.287292   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:12.786305   48339 type.go:168] "Request Body" body=""
	I1212 00:16:12.786379   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:12.786714   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:13.286636   48339 type.go:168] "Request Body" body=""
	I1212 00:16:13.286735   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:13.287061   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:13.786837   48339 type.go:168] "Request Body" body=""
	I1212 00:16:13.786905   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:13.787175   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:13.787217   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:14.286785   48339 type.go:168] "Request Body" body=""
	I1212 00:16:14.286860   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:14.287199   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:14.786985   48339 type.go:168] "Request Body" body=""
	I1212 00:16:14.787080   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:14.787391   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:15.287017   48339 type.go:168] "Request Body" body=""
	I1212 00:16:15.287092   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:15.287365   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:15.786104   48339 type.go:168] "Request Body" body=""
	I1212 00:16:15.786194   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:15.786515   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:16.286214   48339 type.go:168] "Request Body" body=""
	I1212 00:16:16.286312   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:16.286611   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:16.286662   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:16.787101   48339 type.go:168] "Request Body" body=""
	I1212 00:16:16.787177   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:16.787436   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:17.286203   48339 type.go:168] "Request Body" body=""
	I1212 00:16:17.286282   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:17.286588   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:17.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:16:17.786273   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:17.786616   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:18.286463   48339 type.go:168] "Request Body" body=""
	I1212 00:16:18.286538   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:18.286889   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:18.286938   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:18.786189   48339 type.go:168] "Request Body" body=""
	I1212 00:16:18.786282   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:18.786626   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:19.286360   48339 type.go:168] "Request Body" body=""
	I1212 00:16:19.286434   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:19.286751   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:19.786162   48339 type.go:168] "Request Body" body=""
	I1212 00:16:19.786239   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:19.786514   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:20.286225   48339 type.go:168] "Request Body" body=""
	I1212 00:16:20.286301   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:20.286620   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:20.786210   48339 type.go:168] "Request Body" body=""
	I1212 00:16:20.786283   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:20.786562   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:20.786610   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:21.286154   48339 type.go:168] "Request Body" body=""
	I1212 00:16:21.286236   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:21.286508   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:21.552920   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:16:21.609312   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:21.612881   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:21.612910   48339 retry.go:31] will retry after 24.114548677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
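	The retry.go entry above schedules another apply attempt after a backoff interval. A minimal sketch of that apply-and-retry behaviour (the helper below is a hypothetical stand-in, not minikube's retry.go; the kubectl path and manifest are taken from the log, and the KUBECONFIG handling is omitted for brevity):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry shells out to kubectl and, on failure, waits an
// increasing interval before trying again, mirroring the
// "will retry after ..." lines in the log.
func applyWithRetry(kubectl, manifest string, maxAttempts int) error {
	delay := 5 * time.Second
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		out, err := exec.Command("sudo", kubectl, "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("%v\noutput:\n%s", err, out)
		fmt.Printf("will retry after %s: %v\n", delay, lastErr)
		time.Sleep(delay)
		delay *= 2 // grow the interval between attempts
	}
	return lastErr
}

func main() {
	err := applyWithRetry("/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/etc/kubernetes/addons/storageclass.yaml", 4)
	if err != nil {
		fmt.Println("giving up:", err)
	}
}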
	[ ... 3 identical poll cycles omitted (00:16:21.786 to 00:16:22.786) ... ]
	W1212 00:16:22.786820   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 5 identical poll cycles omitted (00:16:23.286 to 00:16:25.286) ... ]
	W1212 00:16:25.286790   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 5 identical poll cycles omitted (00:16:25.786 to 00:16:27.786) ... ]
	W1212 00:16:27.786745   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 4 identical poll cycles omitted (00:16:28.286 to 00:16:29.786) ... ]
	W1212 00:16:29.787321   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 4 identical poll cycles omitted (00:16:30.287 to 00:16:31.786) ... ]
	I1212 00:16:32.259311   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	[ ... 1 poll cycle omitted (00:16:32.286) ... ]
	W1212 00:16:32.287191   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:32.315690   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:32.319144   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:32.319251   48339 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
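	For context: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver (the /openapi/v2 URL in the error), so while the apiserver is refusing connections the apply fails during validation, before anything is submitted, and kubectl exits with status 1. A small stdlib sketch of how a command runner distinguishes that non-zero exit (the "Process exited with status 1" above) from a command that could not start at all (names are illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes a command and reports whether it failed with a non-zero
// exit status (like the kubectl validation failures above) or failed to
// start in the first place.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("ok:\n%s", out)
	case errors.As(err, &ee):
		fmt.Printf("Process exited with status %d\noutput:\n%s", ee.ExitCode(), out)
	default:
		fmt.Printf("could not start %s: %v\n", name, err)
	}
}

func main() {
	run("kubectl", "apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
}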
	[ ... 5 identical poll cycles omitted (00:16:32.786 to 00:16:34.787) ... ]
	W1212 00:16:34.787444   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 5 identical poll cycles omitted (00:16:35.286 to 00:16:37.286) ... ]
	W1212 00:16:37.286600   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 4 identical poll cycles omitted (00:16:37.787 to 00:16:39.286) ... ]
	W1212 00:16:39.287187   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 4 identical poll cycles omitted (00:16:39.786 to 00:16:41.287) ... ]
	W1212 00:16:41.287561   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 5 identical poll cycles omitted (00:16:41.786 to 00:16:43.786) ... ]
	W1212 00:16:43.787388   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 3 identical poll cycles omitted (00:16:44.287 to 00:16:45.286) ... ]
	I1212 00:16:45.728277   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[ ... 1 poll cycle omitted (00:16:45.786) ... ]
	I1212 00:16:45.788347   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:45.788381   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:45.788458   48339 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 00:16:45.791789   48339 out.go:179] * Enabled addons: 
	I1212 00:16:45.795459   48339 addons.go:530] duration metric: took 1m33.015656607s for enable addons: enabled=[]
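	The empty enabled=[] list and the duration metric above come from the addon-enable bookkeeping: every addon's apply callback failed, so nothing was recorded as enabled, and the total wall time was still logged. A minimal sketch of that bookkeeping (names and structure are assumptions for illustration, not minikube's addons.go):

package main

import (
	"fmt"
	"time"
)

// enableAddons runs each addon callback, keeps the ones that succeed,
// and logs the total duration, mirroring the two log lines above.
func enableAddons(callbacks map[string]func() error) []string {
	start := time.Now()
	var enabled []string
	for name, cb := range callbacks {
		if err := cb(); err != nil {
			fmt.Printf("! Enabling '%s' returned an error: %v\n", name, err)
			continue
		}
		enabled = append(enabled, name)
	}
	fmt.Printf("* Enabled addons: %v\n", enabled)
	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n", time.Since(start), enabled)
	return enabled
}

func main() {
	enableAddons(map[string]func() error{
		"storage-provisioner":  func() error { return fmt.Errorf("connection refused") },
		"default-storageclass": func() error { return fmt.Errorf("connection refused") },
	})
}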
	[ ... 1 poll cycle omitted (00:16:46.287) ... ]
	W1212 00:16:46.287462   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 5 identical poll cycles omitted (00:16:46.786 to 00:16:48.786) ... ]
	W1212 00:16:48.786943   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 5 identical poll cycles omitted (00:16:49.286 to 00:16:51.287) ... ]
	W1212 00:16:51.287504   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 5 identical poll cycles omitted (00:16:51.786 to 00:16:53.786) ... ]
	W1212 00:16:53.787368   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 4 identical poll cycles omitted (00:16:54.287 to 00:16:55.787) ... ]
	W1212 00:16:55.787438   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 5 identical poll cycles omitted (00:16:56.286 to 00:16:58.286) ... ]
	W1212 00:16:58.286842   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 4 identical poll cycles omitted (00:16:58.786 to 00:17:00.301) ... ]
	W1212 00:17:00.301755   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 5 identical poll cycles omitted (00:17:00.786 to 00:17:02.786) ... ]
	W1212 00:17:02.786901   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 5 identical poll cycles omitted (00:17:03.286 to 00:17:05.286) ... ]
	W1212 00:17:05.287068   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 4 identical poll cycles omitted (00:17:05.786 to 00:17:07.286) ... ]
	W1212 00:17:07.287118   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 4 identical poll cycles omitted (00:17:07.786 to 00:17:09.287) ... ]
	W1212 00:17:09.287495   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... 5 identical poll cycles omitted (00:17:09.786 to 00:17:11.786) ... ]
	W1212 00:17:11.786489   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:12.286149   48339 type.go:168] "Request Body" body=""
	I1212 00:17:12.286229   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:12.286580   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:12.786352   48339 type.go:168] "Request Body" body=""
	I1212 00:17:12.786428   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:12.786688   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:13.286596   48339 type.go:168] "Request Body" body=""
	I1212 00:17:13.286663   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:13.286919   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:13.786173   48339 type.go:168] "Request Body" body=""
	I1212 00:17:13.786241   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:13.786564   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:13.786616   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:14.286270   48339 type.go:168] "Request Body" body=""
	I1212 00:17:14.286348   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:14.286675   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:14.786352   48339 type.go:168] "Request Body" body=""
	I1212 00:17:14.786428   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:14.786687   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:15.286227   48339 type.go:168] "Request Body" body=""
	I1212 00:17:15.286303   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:15.286628   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:15.786172   48339 type.go:168] "Request Body" body=""
	I1212 00:17:15.786249   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:15.786573   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:16.286101   48339 type.go:168] "Request Body" body=""
	I1212 00:17:16.286166   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:16.286405   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:16.286442   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:16.786141   48339 type.go:168] "Request Body" body=""
	I1212 00:17:16.786209   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:16.786499   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:17.286249   48339 type.go:168] "Request Body" body=""
	I1212 00:17:17.286330   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:17.286684   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:17.786983   48339 type.go:168] "Request Body" body=""
	I1212 00:17:17.787073   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:17.787361   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:18.286161   48339 type.go:168] "Request Body" body=""
	I1212 00:17:18.286235   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:18.286595   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:18.286655   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:18.786181   48339 type.go:168] "Request Body" body=""
	I1212 00:17:18.786265   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:18.786618   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:19.286155   48339 type.go:168] "Request Body" body=""
	I1212 00:17:19.286234   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:19.286527   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:19.786171   48339 type.go:168] "Request Body" body=""
	I1212 00:17:19.786268   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:19.786580   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:20.286265   48339 type.go:168] "Request Body" body=""
	I1212 00:17:20.286345   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:20.286667   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:20.286729   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:20.786253   48339 type.go:168] "Request Body" body=""
	I1212 00:17:20.786335   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:20.786585   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:21.286248   48339 type.go:168] "Request Body" body=""
	I1212 00:17:21.286349   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:21.286645   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:21.786360   48339 type.go:168] "Request Body" body=""
	I1212 00:17:21.786432   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:21.786770   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:22.286447   48339 type.go:168] "Request Body" body=""
	I1212 00:17:22.286522   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:22.286821   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:22.286872   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:22.786477   48339 type.go:168] "Request Body" body=""
	I1212 00:17:22.786551   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:22.786870   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:23.286630   48339 type.go:168] "Request Body" body=""
	I1212 00:17:23.286708   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:23.287045   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:23.786799   48339 type.go:168] "Request Body" body=""
	I1212 00:17:23.786866   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:23.787137   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:24.286984   48339 type.go:168] "Request Body" body=""
	I1212 00:17:24.287110   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:24.287379   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:24.287422   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:24.787166   48339 type.go:168] "Request Body" body=""
	I1212 00:17:24.787236   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:24.787551   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:25.286131   48339 type.go:168] "Request Body" body=""
	I1212 00:17:25.286198   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:25.286515   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:25.786175   48339 type.go:168] "Request Body" body=""
	I1212 00:17:25.786258   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:25.786585   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:26.286285   48339 type.go:168] "Request Body" body=""
	I1212 00:17:26.286371   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:26.286713   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:26.786399   48339 type.go:168] "Request Body" body=""
	I1212 00:17:26.786473   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:26.786722   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:26.786769   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:27.286222   48339 type.go:168] "Request Body" body=""
	I1212 00:17:27.286300   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:27.286683   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:27.786241   48339 type.go:168] "Request Body" body=""
	I1212 00:17:27.786319   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:27.786666   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:28.286480   48339 type.go:168] "Request Body" body=""
	I1212 00:17:28.286553   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:28.286814   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:28.786177   48339 type.go:168] "Request Body" body=""
	I1212 00:17:28.786245   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:28.786593   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:29.286294   48339 type.go:168] "Request Body" body=""
	I1212 00:17:29.286373   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:29.286698   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:29.286749   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:29.787059   48339 type.go:168] "Request Body" body=""
	I1212 00:17:29.787135   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:29.787388   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:30.287153   48339 type.go:168] "Request Body" body=""
	I1212 00:17:30.287233   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:30.287571   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:30.786130   48339 type.go:168] "Request Body" body=""
	I1212 00:17:30.786208   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:30.786533   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:31.286233   48339 type.go:168] "Request Body" body=""
	I1212 00:17:31.286304   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:31.286552   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:31.786198   48339 type.go:168] "Request Body" body=""
	I1212 00:17:31.786272   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:31.786658   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:31.786711   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:32.286228   48339 type.go:168] "Request Body" body=""
	I1212 00:17:32.286302   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:32.286672   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:32.786431   48339 type.go:168] "Request Body" body=""
	I1212 00:17:32.786501   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:32.786749   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:33.286661   48339 type.go:168] "Request Body" body=""
	I1212 00:17:33.286739   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:33.287070   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:33.786835   48339 type.go:168] "Request Body" body=""
	I1212 00:17:33.786916   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:33.787267   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:33.787323   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:34.287052   48339 type.go:168] "Request Body" body=""
	I1212 00:17:34.287118   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:34.287368   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:34.787078   48339 type.go:168] "Request Body" body=""
	I1212 00:17:34.787151   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:34.787466   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:35.286168   48339 type.go:168] "Request Body" body=""
	I1212 00:17:35.286247   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:35.286575   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:35.787150   48339 type.go:168] "Request Body" body=""
	I1212 00:17:35.787215   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:35.787459   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:35.787500   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:36.286166   48339 type.go:168] "Request Body" body=""
	I1212 00:17:36.286238   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:36.286556   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:36.786086   48339 type.go:168] "Request Body" body=""
	I1212 00:17:36.786158   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:36.786493   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:37.287083   48339 type.go:168] "Request Body" body=""
	I1212 00:17:37.287149   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:37.287400   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:37.786113   48339 type.go:168] "Request Body" body=""
	I1212 00:17:37.786187   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:37.786493   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:38.286449   48339 type.go:168] "Request Body" body=""
	I1212 00:17:38.286532   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:38.286863   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:38.286918   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:38.786428   48339 type.go:168] "Request Body" body=""
	I1212 00:17:38.786493   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:38.786739   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:39.286231   48339 type.go:168] "Request Body" body=""
	I1212 00:17:39.286328   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:39.286669   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:39.786191   48339 type.go:168] "Request Body" body=""
	I1212 00:17:39.786261   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:39.786574   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:40.286097   48339 type.go:168] "Request Body" body=""
	I1212 00:17:40.286176   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:40.286475   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:40.786241   48339 type.go:168] "Request Body" body=""
	I1212 00:17:40.786314   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:40.786667   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:40.786722   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:41.286407   48339 type.go:168] "Request Body" body=""
	I1212 00:17:41.286483   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:41.286793   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:41.786385   48339 type.go:168] "Request Body" body=""
	I1212 00:17:41.786504   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:41.786782   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:42.286235   48339 type.go:168] "Request Body" body=""
	I1212 00:17:42.286314   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:42.286656   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:42.786517   48339 type.go:168] "Request Body" body=""
	I1212 00:17:42.786601   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:42.786955   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:42.787030   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:43.286740   48339 type.go:168] "Request Body" body=""
	I1212 00:17:43.286811   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:43.287101   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:43.786897   48339 type.go:168] "Request Body" body=""
	I1212 00:17:43.786970   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:43.787283   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:44.287080   48339 type.go:168] "Request Body" body=""
	I1212 00:17:44.287151   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:44.287449   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:44.786108   48339 type.go:168] "Request Body" body=""
	I1212 00:17:44.786188   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:44.786505   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:45.286236   48339 type.go:168] "Request Body" body=""
	I1212 00:17:45.286337   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:45.286642   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:45.286697   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:45.786185   48339 type.go:168] "Request Body" body=""
	I1212 00:17:45.786259   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:45.786591   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:46.286218   48339 type.go:168] "Request Body" body=""
	I1212 00:17:46.286326   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:46.286644   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:46.786217   48339 type.go:168] "Request Body" body=""
	I1212 00:17:46.786289   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:46.786616   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:47.286247   48339 type.go:168] "Request Body" body=""
	I1212 00:17:47.286340   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:47.286709   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:47.286768   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:47.786186   48339 type.go:168] "Request Body" body=""
	I1212 00:17:47.786263   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:47.786590   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:48.286576   48339 type.go:168] "Request Body" body=""
	I1212 00:17:48.286657   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:48.287040   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:48.786801   48339 type.go:168] "Request Body" body=""
	I1212 00:17:48.786875   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:48.787271   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:49.287049   48339 type.go:168] "Request Body" body=""
	I1212 00:17:49.287121   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:49.287376   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:49.287415   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:49.786455   48339 type.go:168] "Request Body" body=""
	I1212 00:17:49.786542   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:49.786946   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:50.286236   48339 type.go:168] "Request Body" body=""
	I1212 00:17:50.286337   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:50.286768   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:50.787077   48339 type.go:168] "Request Body" body=""
	I1212 00:17:50.787161   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:50.787441   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:51.286171   48339 type.go:168] "Request Body" body=""
	I1212 00:17:51.286244   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:51.286582   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:51.786669   48339 type.go:168] "Request Body" body=""
	I1212 00:17:51.786740   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:51.787072   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:51.787128   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:52.286721   48339 type.go:168] "Request Body" body=""
	I1212 00:17:52.286792   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:52.287074   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:52.787058   48339 type.go:168] "Request Body" body=""
	I1212 00:17:52.787135   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:52.787466   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:53.286395   48339 type.go:168] "Request Body" body=""
	I1212 00:17:53.286475   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:53.286789   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:53.786172   48339 type.go:168] "Request Body" body=""
	I1212 00:17:53.786242   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:53.786578   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:54.286214   48339 type.go:168] "Request Body" body=""
	I1212 00:17:54.286284   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:54.286634   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:54.286688   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:54.786336   48339 type.go:168] "Request Body" body=""
	I1212 00:17:54.786415   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:54.786747   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:55.287099   48339 type.go:168] "Request Body" body=""
	I1212 00:17:55.287165   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:55.287421   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:55.787184   48339 type.go:168] "Request Body" body=""
	I1212 00:17:55.787260   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:55.787579   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:56.286208   48339 type.go:168] "Request Body" body=""
	I1212 00:17:56.286283   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:56.286616   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:56.786874   48339 type.go:168] "Request Body" body=""
	I1212 00:17:56.786946   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:56.787207   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:56.787260   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:57.286785   48339 type.go:168] "Request Body" body=""
	I1212 00:17:57.286872   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:57.287249   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:57.786907   48339 type.go:168] "Request Body" body=""
	I1212 00:17:57.786979   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:57.787325   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:58.287084   48339 type.go:168] "Request Body" body=""
	I1212 00:17:58.287156   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:58.287408   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:58.787171   48339 type.go:168] "Request Body" body=""
	I1212 00:17:58.787247   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:58.787569   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:58.787624   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:59.286220   48339 type.go:168] "Request Body" body=""
	I1212 00:17:59.286295   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:59.286637   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:59.786160   48339 type.go:168] "Request Body" body=""
	I1212 00:17:59.786226   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:59.786481   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:00.286342   48339 type.go:168] "Request Body" body=""
	I1212 00:18:00.286424   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:00.286745   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:00.786411   48339 type.go:168] "Request Body" body=""
	I1212 00:18:00.786487   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:00.786799   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:01.286485   48339 type.go:168] "Request Body" body=""
	I1212 00:18:01.286554   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:01.286822   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:01.286864   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:01.786179   48339 type.go:168] "Request Body" body=""
	I1212 00:18:01.786249   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:01.786559   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:02.286272   48339 type.go:168] "Request Body" body=""
	I1212 00:18:02.286352   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:02.286681   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:02.786397   48339 type.go:168] "Request Body" body=""
	I1212 00:18:02.786473   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:02.786729   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:03.286684   48339 type.go:168] "Request Body" body=""
	I1212 00:18:03.286756   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:03.287062   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:03.287108   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:03.786757   48339 type.go:168] "Request Body" body=""
	I1212 00:18:03.786848   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:03.787220   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same "Request Body" / "Request" / "Response" triple for GET https://192.168.49.2:8441/api/v1/nodes/functional-767012 repeats unchanged every ~500 ms from 00:18:04 through 00:19:05, every attempt returning status="" headers="" milliseconds=0 (connection refused), with node_ready.go:55 logging the identical "will retry" warning roughly every two seconds ...]
	W1212 00:19:05.286753   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:05.786175   48339 type.go:168] "Request Body" body=""
	I1212 00:19:05.786254   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:05.786601   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:06.286229   48339 type.go:168] "Request Body" body=""
	I1212 00:19:06.286306   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:06.286600   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:06.786191   48339 type.go:168] "Request Body" body=""
	I1212 00:19:06.786264   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:06.786580   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:07.286119   48339 type.go:168] "Request Body" body=""
	I1212 00:19:07.286199   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:07.286473   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:07.786186   48339 type.go:168] "Request Body" body=""
	I1212 00:19:07.786259   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:07.786536   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:07.786581   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:08.286383   48339 type.go:168] "Request Body" body=""
	I1212 00:19:08.286463   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:08.286917   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:08.786174   48339 type.go:168] "Request Body" body=""
	I1212 00:19:08.786248   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:08.786551   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:09.286220   48339 type.go:168] "Request Body" body=""
	I1212 00:19:09.286299   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:09.286639   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:09.786169   48339 type.go:168] "Request Body" body=""
	I1212 00:19:09.786240   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:09.786540   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:10.286846   48339 type.go:168] "Request Body" body=""
	I1212 00:19:10.286915   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:10.287189   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:10.287228   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:10.787020   48339 type.go:168] "Request Body" body=""
	I1212 00:19:10.787096   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:10.787416   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:11.286118   48339 type.go:168] "Request Body" body=""
	I1212 00:19:11.286193   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:11.286517   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:11.786150   48339 type.go:168] "Request Body" body=""
	I1212 00:19:11.786231   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:11.786516   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:12.286197   48339 type.go:168] "Request Body" body=""
	I1212 00:19:12.286271   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:12.286598   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:12.786359   48339 type.go:168] "Request Body" body=""
	I1212 00:19:12.786434   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:12.786739   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:12.786787   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:13.286561   48339 type.go:168] "Request Body" body=""
	I1212 00:19:13.286637   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:13.286885   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:13.786215   48339 type.go:168] "Request Body" body=""
	I1212 00:19:13.786291   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:13.786637   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:14.286215   48339 type.go:168] "Request Body" body=""
	I1212 00:19:14.286287   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:14.286589   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:14.786851   48339 type.go:168] "Request Body" body=""
	I1212 00:19:14.786918   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:14.787262   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:14.787320   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:15.287090   48339 type.go:168] "Request Body" body=""
	I1212 00:19:15.287165   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:15.287490   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:15.786172   48339 type.go:168] "Request Body" body=""
	I1212 00:19:15.786269   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:15.786579   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:16.286136   48339 type.go:168] "Request Body" body=""
	I1212 00:19:16.286210   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:16.286453   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:16.786222   48339 type.go:168] "Request Body" body=""
	I1212 00:19:16.786299   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:16.786659   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:17.286375   48339 type.go:168] "Request Body" body=""
	I1212 00:19:17.286453   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:17.286795   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:17.286857   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:17.786163   48339 type.go:168] "Request Body" body=""
	I1212 00:19:17.786237   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:17.786560   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:18.286451   48339 type.go:168] "Request Body" body=""
	I1212 00:19:18.286531   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:18.286856   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:18.786177   48339 type.go:168] "Request Body" body=""
	I1212 00:19:18.786251   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:18.786557   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:19.286160   48339 type.go:168] "Request Body" body=""
	I1212 00:19:19.286232   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:19.286485   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:19.786192   48339 type.go:168] "Request Body" body=""
	I1212 00:19:19.786263   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:19.786567   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:19.786614   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:20.286287   48339 type.go:168] "Request Body" body=""
	I1212 00:19:20.286370   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:20.286718   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:20.787029   48339 type.go:168] "Request Body" body=""
	I1212 00:19:20.787097   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:20.787342   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:21.287119   48339 type.go:168] "Request Body" body=""
	I1212 00:19:21.287198   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:21.287505   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:21.786177   48339 type.go:168] "Request Body" body=""
	I1212 00:19:21.786266   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:21.786579   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:22.287046   48339 type.go:168] "Request Body" body=""
	I1212 00:19:22.287111   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:22.287377   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:22.287420   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:22.786275   48339 type.go:168] "Request Body" body=""
	I1212 00:19:22.786343   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:22.786646   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:23.286598   48339 type.go:168] "Request Body" body=""
	I1212 00:19:23.286692   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:23.287042   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:23.786834   48339 type.go:168] "Request Body" body=""
	I1212 00:19:23.786913   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:23.787199   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:24.286916   48339 type.go:168] "Request Body" body=""
	I1212 00:19:24.287018   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:24.287331   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:24.787102   48339 type.go:168] "Request Body" body=""
	I1212 00:19:24.787174   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:24.787525   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:24.787578   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:25.286170   48339 type.go:168] "Request Body" body=""
	I1212 00:19:25.286246   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:25.286510   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:25.787014   48339 type.go:168] "Request Body" body=""
	I1212 00:19:25.787086   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:25.787411   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:26.286133   48339 type.go:168] "Request Body" body=""
	I1212 00:19:26.286205   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:26.286533   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:26.786125   48339 type.go:168] "Request Body" body=""
	I1212 00:19:26.786195   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:26.786499   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:27.286222   48339 type.go:168] "Request Body" body=""
	I1212 00:19:27.286294   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:27.286619   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:27.286677   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:27.786180   48339 type.go:168] "Request Body" body=""
	I1212 00:19:27.786252   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:27.786586   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:28.286372   48339 type.go:168] "Request Body" body=""
	I1212 00:19:28.286448   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:28.286700   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:28.786195   48339 type.go:168] "Request Body" body=""
	I1212 00:19:28.786271   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:28.786605   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:29.286191   48339 type.go:168] "Request Body" body=""
	I1212 00:19:29.286267   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:29.286615   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:29.786910   48339 type.go:168] "Request Body" body=""
	I1212 00:19:29.786981   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:29.787247   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:29.787287   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:30.287073   48339 type.go:168] "Request Body" body=""
	I1212 00:19:30.287154   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:30.287499   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:30.786184   48339 type.go:168] "Request Body" body=""
	I1212 00:19:30.786259   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:30.786602   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:31.286871   48339 type.go:168] "Request Body" body=""
	I1212 00:19:31.286942   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:31.287207   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:31.786942   48339 type.go:168] "Request Body" body=""
	I1212 00:19:31.787038   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:31.787334   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:31.787377   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:32.287019   48339 type.go:168] "Request Body" body=""
	I1212 00:19:32.287094   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:32.287431   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:32.786241   48339 type.go:168] "Request Body" body=""
	I1212 00:19:32.786308   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:32.786562   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:33.286586   48339 type.go:168] "Request Body" body=""
	I1212 00:19:33.286669   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:33.287081   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:33.786842   48339 type.go:168] "Request Body" body=""
	I1212 00:19:33.786915   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:33.787232   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:34.286965   48339 type.go:168] "Request Body" body=""
	I1212 00:19:34.287052   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:34.287321   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:34.287371   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:34.787103   48339 type.go:168] "Request Body" body=""
	I1212 00:19:34.787184   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:34.787507   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:35.286199   48339 type.go:168] "Request Body" body=""
	I1212 00:19:35.286275   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:35.286623   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:35.786303   48339 type.go:168] "Request Body" body=""
	I1212 00:19:35.786378   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:35.786633   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:36.286201   48339 type.go:168] "Request Body" body=""
	I1212 00:19:36.286276   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:36.286623   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:36.786173   48339 type.go:168] "Request Body" body=""
	I1212 00:19:36.786245   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:36.786551   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:36.786609   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:37.286157   48339 type.go:168] "Request Body" body=""
	I1212 00:19:37.286229   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:37.286482   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:37.786158   48339 type.go:168] "Request Body" body=""
	I1212 00:19:37.786237   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:37.786552   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:38.286494   48339 type.go:168] "Request Body" body=""
	I1212 00:19:38.286574   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:38.286901   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:38.786471   48339 type.go:168] "Request Body" body=""
	I1212 00:19:38.786543   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:38.786828   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:38.786871   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:39.286234   48339 type.go:168] "Request Body" body=""
	I1212 00:19:39.286306   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:39.286633   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:39.786191   48339 type.go:168] "Request Body" body=""
	I1212 00:19:39.786273   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:39.786591   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:40.286174   48339 type.go:168] "Request Body" body=""
	I1212 00:19:40.286246   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:40.286501   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:40.786212   48339 type.go:168] "Request Body" body=""
	I1212 00:19:40.786284   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:40.786618   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:41.286308   48339 type.go:168] "Request Body" body=""
	I1212 00:19:41.286385   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:41.286717   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:41.286778   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:41.786259   48339 type.go:168] "Request Body" body=""
	I1212 00:19:41.786336   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:41.786616   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:42.286266   48339 type.go:168] "Request Body" body=""
	I1212 00:19:42.286426   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:42.286836   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:42.786557   48339 type.go:168] "Request Body" body=""
	I1212 00:19:42.786636   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:42.786968   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:43.286831   48339 type.go:168] "Request Body" body=""
	I1212 00:19:43.286907   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:43.287195   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:43.287247   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:43.786980   48339 type.go:168] "Request Body" body=""
	I1212 00:19:43.787071   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:43.787383   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:44.286097   48339 type.go:168] "Request Body" body=""
	I1212 00:19:44.286182   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:44.286516   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:44.787095   48339 type.go:168] "Request Body" body=""
	I1212 00:19:44.787170   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:44.787420   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:45.286197   48339 type.go:168] "Request Body" body=""
	I1212 00:19:45.286315   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:45.286686   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:45.786212   48339 type.go:168] "Request Body" body=""
	I1212 00:19:45.786292   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:45.786616   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:45.786667   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:46.286316   48339 type.go:168] "Request Body" body=""
	I1212 00:19:46.286391   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:46.286672   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:46.786182   48339 type.go:168] "Request Body" body=""
	I1212 00:19:46.786255   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:46.786571   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:47.286220   48339 type.go:168] "Request Body" body=""
	I1212 00:19:47.286293   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:47.286639   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:47.787075   48339 type.go:168] "Request Body" body=""
	I1212 00:19:47.787141   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:47.787388   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:47.787425   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:48.286333   48339 type.go:168] "Request Body" body=""
	I1212 00:19:48.286406   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:48.286742   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:48.786260   48339 type.go:168] "Request Body" body=""
	I1212 00:19:48.786335   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:48.786670   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:49.286373   48339 type.go:168] "Request Body" body=""
	I1212 00:19:49.286448   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:49.286721   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:49.786393   48339 type.go:168] "Request Body" body=""
	I1212 00:19:49.786466   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:49.786793   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:50.286556   48339 type.go:168] "Request Body" body=""
	I1212 00:19:50.286645   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:50.286977   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:50.287046   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:50.786244   48339 type.go:168] "Request Body" body=""
	I1212 00:19:50.786323   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:50.786639   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:51.286207   48339 type.go:168] "Request Body" body=""
	I1212 00:19:51.286281   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:51.286646   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:51.786236   48339 type.go:168] "Request Body" body=""
	I1212 00:19:51.786326   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:51.786698   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:52.286383   48339 type.go:168] "Request Body" body=""
	I1212 00:19:52.286453   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:52.286705   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:52.786430   48339 type.go:168] "Request Body" body=""
	I1212 00:19:52.786502   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:52.786808   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:52.786864   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.286726   48339 type.go:168] "Request Body" body=""
	I1212 00:19:53.286799   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:53.287127   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:53.786892   48339 type.go:168] "Request Body" body=""
	I1212 00:19:53.786963   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:53.787281   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:54.287089   48339 type.go:168] "Request Body" body=""
	I1212 00:19:54.287161   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:54.287510   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:54.787071   48339 type.go:168] "Request Body" body=""
	I1212 00:19:54.787148   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:54.787473   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:54.787523   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:55.287042   48339 type.go:168] "Request Body" body=""
	I1212 00:19:55.287120   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:55.287397   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 00:19:55 through 00:20:57: the GET request/response cycle shown above repeats at ~500 ms intervals against https://192.168.49.2:8441/api/v1/nodes/functional-767012, every attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 logs the "will retry" warning after roughly every fourth to fifth attempt, about every 2 s ...]
	W1212 00:20:57.287566   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:57.786231   48339 type.go:168] "Request Body" body=""
	I1212 00:20:57.786303   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:57.786626   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:58.286503   48339 type.go:168] "Request Body" body=""
	I1212 00:20:58.286567   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:58.286819   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:58.786175   48339 type.go:168] "Request Body" body=""
	I1212 00:20:58.786249   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:58.786577   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:59.286221   48339 type.go:168] "Request Body" body=""
	I1212 00:20:59.286300   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:59.286643   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:59.786201   48339 type.go:168] "Request Body" body=""
	I1212 00:20:59.786272   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:59.786717   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:59.786766   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:00.286416   48339 type.go:168] "Request Body" body=""
	I1212 00:21:00.286498   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:00.286792   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:00.786190   48339 type.go:168] "Request Body" body=""
	I1212 00:21:00.786269   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:00.786582   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:01.286121   48339 type.go:168] "Request Body" body=""
	I1212 00:21:01.286194   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:01.286449   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:01.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:21:01.786294   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:01.786641   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:02.286227   48339 type.go:168] "Request Body" body=""
	I1212 00:21:02.286306   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:02.286637   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:02.286688   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:02.786377   48339 type.go:168] "Request Body" body=""
	I1212 00:21:02.786458   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:02.786789   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:03.286595   48339 type.go:168] "Request Body" body=""
	I1212 00:21:03.286680   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:03.287072   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:03.786847   48339 type.go:168] "Request Body" body=""
	I1212 00:21:03.786925   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:03.787257   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:04.287036   48339 type.go:168] "Request Body" body=""
	I1212 00:21:04.287108   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:04.287431   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:04.287477   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:04.786103   48339 type.go:168] "Request Body" body=""
	I1212 00:21:04.786178   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:04.786510   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:05.286213   48339 type.go:168] "Request Body" body=""
	I1212 00:21:05.286293   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:05.286653   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:05.786170   48339 type.go:168] "Request Body" body=""
	I1212 00:21:05.786239   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:05.786497   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:06.286230   48339 type.go:168] "Request Body" body=""
	I1212 00:21:06.286305   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:06.286647   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:06.786360   48339 type.go:168] "Request Body" body=""
	I1212 00:21:06.786435   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:06.786771   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:06.786825   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:07.286459   48339 type.go:168] "Request Body" body=""
	I1212 00:21:07.286536   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:07.286784   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:07.786179   48339 type.go:168] "Request Body" body=""
	I1212 00:21:07.786260   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:07.786613   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:08.286429   48339 type.go:168] "Request Body" body=""
	I1212 00:21:08.286512   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:08.286882   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:08.786159   48339 type.go:168] "Request Body" body=""
	I1212 00:21:08.786230   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:08.791780   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1212 00:21:08.791841   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:09.286491   48339 type.go:168] "Request Body" body=""
	I1212 00:21:09.286564   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:09.286869   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:09.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:21:09.786280   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:09.786589   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:10.286143   48339 type.go:168] "Request Body" body=""
	I1212 00:21:10.286219   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:10.286481   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:10.786184   48339 type.go:168] "Request Body" body=""
	I1212 00:21:10.786253   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:10.786584   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:11.286268   48339 type.go:168] "Request Body" body=""
	I1212 00:21:11.286353   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:11.286684   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:11.286736   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:11.786169   48339 type.go:168] "Request Body" body=""
	I1212 00:21:11.786241   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:11.786538   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:12.286254   48339 type.go:168] "Request Body" body=""
	I1212 00:21:12.286329   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:12.286629   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:12.786499   48339 type.go:168] "Request Body" body=""
	I1212 00:21:12.786576   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:12.786914   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:13.286651   48339 type.go:168] "Request Body" body=""
	I1212 00:21:13.286728   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:13.286985   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:13.287050   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:13.786749   48339 type.go:168] "Request Body" body=""
	I1212 00:21:13.786806   48339 node_ready.go:38] duration metric: took 6m0.00081197s for node "functional-767012" to be "Ready" ...
	I1212 00:21:13.789905   48339 out.go:203] 
	W1212 00:21:13.792750   48339 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 00:21:13.792769   48339 out.go:285] * 
	W1212 00:21:13.794879   48339 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:21:13.797575   48339 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-767012 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.258953225s for "functional-767012" cluster.
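
The stderr dump above shows the retry loop behind the failure: minikube polls GET /api/v1/nodes/functional-767012 roughly every 500ms, each attempt ending in "dial tcp 192.168.49.2:8441: connect: connection refused", until the 6m0s node-ready deadline expires. In other words the apiserver never started listening, so the node_ready check never got far enough to evaluate the Ready condition. A minimal Go sketch of the same kind of TCP probe, useful for telling "apiserver not listening" apart from "node not Ready"; the address and cadence are taken from the log, and this is an illustration, not the harness's code:

	// probe.go: dial the apiserver endpoint seen in the log and report
	// whether the port is open at all. Address assumed from the log above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.49.2:8441" // apiserver host:port from the log
		for attempt := 1; attempt <= 5; attempt++ {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err != nil {
				fmt.Printf("attempt %d: %v\n", attempt, err) // e.g. connect: connection refused
				time.Sleep(500 * time.Millisecond)           // matches the ~500ms retry cadence
				continue
			}
			conn.Close()
			fmt.Printf("attempt %d: port open\n", attempt)
			return
		}
	}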
I1212 00:21:14.402467    4290 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-767012
helpers_test.go:244: (dbg) docker inspect functional-767012:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	        "Created": "2025-12-12T00:06:52.261765556Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42951,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:06:52.317917194Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hostname",
	        "HostsPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hosts",
	        "LogPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e-json.log",
	        "Name": "/functional-767012",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-767012:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-767012",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	                "LowerDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-767012",
	                "Source": "/var/lib/docker/volumes/functional-767012/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-767012",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-767012",
	                "name.minikube.sigs.k8s.io": "functional-767012",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e781257da3adf1d3284ab2a6de0168c3db7957f25a7e53d0015250294193762d",
	            "SandboxKey": "/var/run/docker/netns/e781257da3ad",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-767012": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:4d:78:ba:7d:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "83467cc4cb13818b98ec0d7cb5fc0064ea6eb2c8db4256a8a81330921aa2d9a4",
	                    "EndpointID": "b787b732d8d748776ceeb6e65fab51cc1e79758446bc85ac20043b35593fab12",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-767012",
	                        "6585a82fe5e6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
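
The inspect output above confirms the container is running and that the apiserver's 8441/tcp is published on 127.0.0.1:32791, so the refused connections inside the guest are not caused by a missing port mapping. A short Go sketch that recovers that host port programmatically; it shells out to the same inspect template minikube itself runs for 22/tcp later in this log (the 8441/tcp variant is an assumption by analogy):

	// port.go: read the host port Docker mapped to the container's 8441/tcp.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
			"functional-767012").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// Prints 32791 for the state captured above.
		fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}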
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012: exit status 2 (416.306058ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
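
Note the apparent contradiction: stdout says "Running" while the command exits 2. The --format={{.Host}} flag selects only the host (container) state, which is indeed running; the non-zero exit presumably reflects other components being down, which is why the harness marks it "may be ok". A minimal text/template illustration of how such a format string is evaluated; the status struct and its field values here are stand-ins for illustration, not minikube's actual status type:

	// format.go: evaluate a --format style Go template against a status value.
	package main

	import (
		"os"
		"text/template"
	)

	// status is a hypothetical stand-in; minikube's real status type differs.
	type status struct{ Host, Kubelet, APIServer string }

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		// Prints just "Running", mirroring the stdout captured above, even
		// though the other fields would report components as stopped.
		tmpl.Execute(os.Stdout, status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"})
	}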
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-095481 ssh sudo cat /etc/ssl/certs/42902.pem                                                                                                         │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ ssh            │ functional-095481 ssh sudo cat /usr/share/ca-certificates/42902.pem                                                                                             │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls                                                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ ssh            │ functional-095481 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image load --daemon kicbase/echo-server:functional-095481 --alsologtostderr                                                                   │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls                                                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image save kicbase/echo-server:functional-095481 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ update-context │ functional-095481 update-context --alsologtostderr -v=2                                                                                                         │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image rm kicbase/echo-server:functional-095481 --alsologtostderr                                                                              │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ update-context │ functional-095481 update-context --alsologtostderr -v=2                                                                                                         │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ update-context │ functional-095481 update-context --alsologtostderr -v=2                                                                                                         │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls                                                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls                                                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image save --daemon kicbase/echo-server:functional-095481 --alsologtostderr                                                                   │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls --format yaml --alsologtostderr                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls --format short --alsologtostderr                                                                                                     │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls --format table --alsologtostderr                                                                                                     │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls --format json --alsologtostderr                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ ssh            │ functional-095481 ssh pgrep buildkitd                                                                                                                           │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │                     │
	│ image          │ functional-095481 image build -t localhost/my-image:functional-095481 testdata/build --alsologtostderr                                                          │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls                                                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ delete         │ -p functional-095481                                                                                                                                            │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ start          │ -p functional-767012 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │                     │
	│ start          │ -p functional-767012 --alsologtostderr -v=8                                                                                                                     │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:15 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:15:08
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:15:08.188216   48339 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:15:08.188435   48339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:15:08.188463   48339 out.go:374] Setting ErrFile to fd 2...
	I1212 00:15:08.188485   48339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:15:08.188893   48339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:15:08.189436   48339 out.go:368] Setting JSON to false
	I1212 00:15:08.190327   48339 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3455,"bootTime":1765495054,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 00:15:08.190468   48339 start.go:143] virtualization:  
	I1212 00:15:08.194075   48339 out.go:179] * [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:15:08.197745   48339 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:15:08.197889   48339 notify.go:221] Checking for updates...
	I1212 00:15:08.203623   48339 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:15:08.206559   48339 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:08.209313   48339 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 00:15:08.212202   48339 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:15:08.215231   48339 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:15:08.218454   48339 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:15:08.218601   48339 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:15:08.244528   48339 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:15:08.244655   48339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:15:08.299617   48339 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:15:08.290252755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:15:08.299730   48339 docker.go:319] overlay module found
	I1212 00:15:08.302863   48339 out.go:179] * Using the docker driver based on existing profile
	I1212 00:15:08.305730   48339 start.go:309] selected driver: docker
	I1212 00:15:08.305754   48339 start.go:927] validating driver "docker" against &{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:15:08.305854   48339 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:15:08.305953   48339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:15:08.359436   48339 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:15:08.349975764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:15:08.359860   48339 cni.go:84] Creating CNI manager for ""
	I1212 00:15:08.359920   48339 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:15:08.359966   48339 start.go:353] cluster config:
	{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:15:08.363136   48339 out.go:179] * Starting "functional-767012" primary control-plane node in "functional-767012" cluster
	I1212 00:15:08.365917   48339 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 00:15:08.368829   48339 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:15:08.371809   48339 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:15:08.371858   48339 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 00:15:08.371872   48339 cache.go:65] Caching tarball of preloaded images
	I1212 00:15:08.371970   48339 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 00:15:08.371992   48339 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 00:15:08.372099   48339 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/config.json ...
	I1212 00:15:08.372328   48339 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:15:08.391509   48339 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:15:08.391533   48339 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:15:08.391552   48339 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:15:08.391583   48339 start.go:360] acquireMachinesLock for functional-767012: {Name:mk41cf89e93a3830367886ebbef2bb8f6e99e3f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:15:08.391643   48339 start.go:364] duration metric: took 36.464µs to acquireMachinesLock for "functional-767012"
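
The {Name:mk41cf... Delay:500ms Timeout:10m0s} spec above describes a retry-until-timeout lock acquisition. A self-contained sketch of that pattern using a plain O_EXCL lock file; the path is made up, and the real lock sits behind start.go's acquireMachinesLock rather than this file-based scheme:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire retries creating the lock file every `delay` until `timeout`,
    // mirroring the Delay/Timeout fields in the spec dump above.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if !errors.Is(err, os.ErrExist) {
                return nil, err
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held")
    }
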
	I1212 00:15:08.391666   48339 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:15:08.391675   48339 fix.go:54] fixHost starting: 
	I1212 00:15:08.391939   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:08.408717   48339 fix.go:112] recreateIfNeeded on functional-767012: state=Running err=<nil>
	W1212 00:15:08.408748   48339 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:15:08.411849   48339 out.go:252] * Updating the running docker "functional-767012" container ...
	I1212 00:15:08.411881   48339 machine.go:94] provisionDockerMachine start ...
	I1212 00:15:08.411961   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:08.429482   48339 main.go:143] libmachine: Using SSH client type: native
	I1212 00:15:08.429817   48339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:15:08.429834   48339 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:15:08.578648   48339 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
	I1212 00:15:08.578671   48339 ubuntu.go:182] provisioning hostname "functional-767012"
	I1212 00:15:08.578741   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:08.596871   48339 main.go:143] libmachine: Using SSH client type: native
	I1212 00:15:08.597187   48339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:15:08.597227   48339 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-767012 && echo "functional-767012" | sudo tee /etc/hostname
	I1212 00:15:08.759668   48339 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
	I1212 00:15:08.759746   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:08.776780   48339 main.go:143] libmachine: Using SSH client type: native
	I1212 00:15:08.777096   48339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:15:08.777119   48339 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-767012' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-767012/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-767012' | sudo tee -a /etc/hosts; 
				fi
			fi
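
The shell block above makes the /etc/hosts update idempotent: bail out if the host is already named, rewrite an existing 127.0.1.1 line, otherwise append one. The same logic as a local Go sketch; the /tmp/hosts path is illustrative:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // Same idempotent update the provisioner runs over SSH above.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        for _, l := range lines {
            f := strings.Fields(l)
            if len(f) >= 2 && f[len(f)-1] == hostname {
                return nil // host already present, leave the file alone
            }
        }
        entry := "127.0.1.1 " + hostname
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = entry // rewrite the existing loopback alias
                return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
            }
        }
        lines = append(lines, entry) // no 127.0.1.1 line yet: append one
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/tmp/hosts", "functional-767012"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
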
	I1212 00:15:08.931523   48339 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:15:08.931550   48339 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 00:15:08.931582   48339 ubuntu.go:190] setting up certificates
	I1212 00:15:08.931592   48339 provision.go:84] configureAuth start
	I1212 00:15:08.931653   48339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:15:08.952406   48339 provision.go:143] copyHostCerts
	I1212 00:15:08.952454   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 00:15:08.952497   48339 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 00:15:08.952507   48339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 00:15:08.952585   48339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 00:15:08.952685   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 00:15:08.952707   48339 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 00:15:08.952712   48339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 00:15:08.952745   48339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 00:15:08.952800   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 00:15:08.952821   48339 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 00:15:08.952828   48339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 00:15:08.952852   48339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 00:15:08.952913   48339 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.functional-767012 san=[127.0.0.1 192.168.49.2 functional-767012 localhost minikube]
	I1212 00:15:09.089842   48339 provision.go:177] copyRemoteCerts
	I1212 00:15:09.089908   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:15:09.089956   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.108065   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.210645   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:15:09.210700   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:15:09.228116   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:15:09.228176   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:15:09.245824   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:15:09.245889   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:15:09.263086   48339 provision.go:87] duration metric: took 331.470752ms to configureAuth
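
provision.go:117 above generates a server certificate whose SANs cover the loopback address, the node IP, and the host/cluster names. A hedged sketch of producing a certificate with those SANs using only the standard library; unlike minikube, which signs with the profile CA shown in the auth options, this one is self-signed:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            // Org and SANs copied from the provision.go:117 line above.
            Subject:     pkix.Name{Organization: []string{"jenkins.functional-767012"}},
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            DNSNames:    []string{"functional-767012", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
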
	I1212 00:15:09.263116   48339 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:15:09.263293   48339 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:15:09.263306   48339 machine.go:97] duration metric: took 851.418761ms to provisionDockerMachine
	I1212 00:15:09.263315   48339 start.go:293] postStartSetup for "functional-767012" (driver="docker")
	I1212 00:15:09.263326   48339 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:15:09.263390   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:15:09.263439   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.281753   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.386868   48339 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:15:09.390421   48339 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 00:15:09.390442   48339 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 00:15:09.390447   48339 command_runner.go:130] > VERSION_ID="12"
	I1212 00:15:09.390451   48339 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 00:15:09.390456   48339 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 00:15:09.390460   48339 command_runner.go:130] > ID=debian
	I1212 00:15:09.390464   48339 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 00:15:09.390469   48339 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 00:15:09.390475   48339 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 00:15:09.390546   48339 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:15:09.390568   48339 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:15:09.390580   48339 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 00:15:09.390640   48339 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 00:15:09.390732   48339 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 00:15:09.390742   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> /etc/ssl/certs/42902.pem
	I1212 00:15:09.390816   48339 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts -> hosts in /etc/test/nested/copy/4290
	I1212 00:15:09.390824   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts -> /etc/test/nested/copy/4290/hosts
	I1212 00:15:09.390867   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4290
	I1212 00:15:09.398526   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:15:09.416059   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts --> /etc/test/nested/copy/4290/hosts (40 bytes)
	I1212 00:15:09.433237   48339 start.go:296] duration metric: took 169.908089ms for postStartSetup
	I1212 00:15:09.433321   48339 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:15:09.433384   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.450800   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.556105   48339 command_runner.go:130] > 14%
	I1212 00:15:09.557034   48339 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:15:09.562380   48339 command_runner.go:130] > 169G
	I1212 00:15:09.562946   48339 fix.go:56] duration metric: took 1.171267005s for fixHost
	I1212 00:15:09.562967   48339 start.go:83] releasing machines lock for "functional-767012", held for 1.171312429s
	I1212 00:15:09.563050   48339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:15:09.582602   48339 ssh_runner.go:195] Run: cat /version.json
	I1212 00:15:09.582654   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.582889   48339 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:15:09.582947   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.601106   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.627042   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.706722   48339 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1212 00:15:09.706847   48339 ssh_runner.go:195] Run: systemctl --version
	I1212 00:15:09.800321   48339 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 00:15:09.800390   48339 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 00:15:09.800423   48339 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 00:15:09.800514   48339 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:15:09.804624   48339 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 00:15:09.804945   48339 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:15:09.805036   48339 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:15:09.812955   48339 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:15:09.813030   48339 start.go:496] detecting cgroup driver to use...
	I1212 00:15:09.813095   48339 detect.go:187] detected "cgroupfs" cgroup driver on host os
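
detect.go:187 reports a "cgroupfs" driver, but the log does not show how the decision is made. One common heuristic, given here purely as an assumption and not necessarily minikube's logic, checks for the cgroup v2 unified hierarchy and a running systemd:

    package main

    import (
        "fmt"
        "os"
    )

    func cgroupDriver() string {
        // cgroup v2 unified hierarchy exposes cgroup.controllers at the root.
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            // On v2 hosts booted with systemd, the systemd driver is typical.
            if _, err := os.Stat("/run/systemd/system"); err == nil {
                return "systemd"
            }
        }
        return "cgroupfs"
    }

    func main() { fmt.Println(cgroupDriver()) }
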
	I1212 00:15:09.813242   48339 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:15:09.829352   48339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:15:09.842558   48339 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:15:09.842620   48339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:15:09.858553   48339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:15:09.872251   48339 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:15:10.008398   48339 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:15:10.140361   48339 docker.go:234] disabling docker service ...
	I1212 00:15:10.140425   48339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:15:10.156860   48339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:15:10.170461   48339 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:15:10.304156   48339 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:15:10.452566   48339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:15:10.465745   48339 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:15:10.479553   48339 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 00:15:10.480868   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 00:15:10.489677   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:15:10.498827   48339 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:15:10.498939   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:15:10.508103   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:15:10.516726   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:15:10.525281   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:15:10.533906   48339 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:15:10.541697   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 00:15:10.550595   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 00:15:10.559645   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
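
The run of sed commands above rewrites individual keys in /etc/containerd/config.toml in place. The SystemdCgroup edit, done programmatically with the same indentation-preserving regex; the file path is illustrative:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    func main() {
        path := "/tmp/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
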
	I1212 00:15:10.568588   48339 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:15:10.575412   48339 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 00:15:10.576366   48339 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:15:10.583788   48339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:15:10.698857   48339 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 00:15:10.837222   48339 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 00:15:10.837316   48339 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 00:15:10.841505   48339 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1212 00:15:10.841543   48339 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 00:15:10.841551   48339 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1212 00:15:10.841558   48339 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:15:10.841564   48339 command_runner.go:130] > Access: 2025-12-12 00:15:10.793315522 +0000
	I1212 00:15:10.841569   48339 command_runner.go:130] > Modify: 2025-12-12 00:15:10.793315522 +0000
	I1212 00:15:10.841575   48339 command_runner.go:130] > Change: 2025-12-12 00:15:10.793315522 +0000
	I1212 00:15:10.841583   48339 command_runner.go:130] >  Birth: -
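
The "Will wait 60s for socket path" step above boils down to polling stat() on the socket until it appears or the deadline passes. A local sketch of that loop; the real check runs its stat over SSH inside the node container:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the path exists and is a unix socket.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("containerd socket is up")
    }
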
	I1212 00:15:10.841612   48339 start.go:564] Will wait 60s for crictl version
	I1212 00:15:10.841667   48339 ssh_runner.go:195] Run: which crictl
	I1212 00:15:10.845418   48339 command_runner.go:130] > /usr/local/bin/crictl
	I1212 00:15:10.845528   48339 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:15:10.867684   48339 command_runner.go:130] > Version:  0.1.0
	I1212 00:15:10.867710   48339 command_runner.go:130] > RuntimeName:  containerd
	I1212 00:15:10.867718   48339 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1212 00:15:10.867725   48339 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 00:15:10.869691   48339 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 00:15:10.869761   48339 ssh_runner.go:195] Run: containerd --version
	I1212 00:15:10.889630   48339 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1212 00:15:10.891644   48339 ssh_runner.go:195] Run: containerd --version
	I1212 00:15:10.909520   48339 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1212 00:15:10.917318   48339 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 00:15:10.920211   48339 cli_runner.go:164] Run: docker network inspect functional-767012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
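
The --format argument above is an ordinary Go text/template that the docker CLI evaluates against the network's JSON. The same {{range .IPAM.Config}} trick, reproduced against a hand-built value; the subnet is inferred from the 192.168.49.x addresses in this log:

    package main

    import (
        "fmt"
        "os"
        "text/template"
    )

    type ipamConfig struct{ Subnet, Gateway string }
    type network struct {
        Name, Driver string
        IPAM         struct{ Config []ipamConfig }
    }

    func main() {
        n := network{Name: "functional-767012", Driver: "bridge"}
        n.IPAM.Config = []ipamConfig{{Subnet: "192.168.49.0/24", Gateway: "192.168.49.1"}}
        t := template.Must(template.New("net").Parse(
            "{{.Name}} ({{.Driver}}): {{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}\n"))
        if err := t.Execute(os.Stdout, n); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
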
	I1212 00:15:10.936971   48339 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:15:10.940949   48339 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1212 00:15:10.941183   48339 kubeadm.go:884] updating cluster {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:15:10.941314   48339 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:15:10.941401   48339 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:15:10.964902   48339 command_runner.go:130] > {
	I1212 00:15:10.964923   48339 command_runner.go:130] >   "images":  [
	I1212 00:15:10.964934   48339 command_runner.go:130] >     {
	I1212 00:15:10.964944   48339 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:15:10.964949   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.964954   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:15:10.964957   48339 command_runner.go:130] >       ],
	I1212 00:15:10.964962   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.964974   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 00:15:10.964977   48339 command_runner.go:130] >       ],
	I1212 00:15:10.964982   48339 command_runner.go:130] >       "size":  "40636774",
	I1212 00:15:10.964989   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.964994   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965005   48339 command_runner.go:130] >     },
	I1212 00:15:10.965009   48339 command_runner.go:130] >     {
	I1212 00:15:10.965017   48339 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:15:10.965023   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965029   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:15:10.965032   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965036   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965047   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:15:10.965050   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965054   48339 command_runner.go:130] >       "size":  "8034419",
	I1212 00:15:10.965058   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965062   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965068   48339 command_runner.go:130] >     },
	I1212 00:15:10.965071   48339 command_runner.go:130] >     {
	I1212 00:15:10.965079   48339 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:15:10.965085   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965092   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:15:10.965095   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965101   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965112   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 00:15:10.965115   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965121   48339 command_runner.go:130] >       "size":  "21168808",
	I1212 00:15:10.965129   48339 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:15:10.965134   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965137   48339 command_runner.go:130] >     },
	I1212 00:15:10.965143   48339 command_runner.go:130] >     {
	I1212 00:15:10.965152   48339 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:15:10.965164   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965169   48339 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:15:10.965172   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965176   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965190   48339 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 00:15:10.965193   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965199   48339 command_runner.go:130] >       "size":  "21136588",
	I1212 00:15:10.965203   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965218   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965224   48339 command_runner.go:130] >       },
	I1212 00:15:10.965228   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965231   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965235   48339 command_runner.go:130] >     },
	I1212 00:15:10.965238   48339 command_runner.go:130] >     {
	I1212 00:15:10.965245   48339 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:15:10.965251   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965256   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:15:10.965262   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965266   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965274   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 00:15:10.965278   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965285   48339 command_runner.go:130] >       "size":  "24678359",
	I1212 00:15:10.965288   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965296   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965302   48339 command_runner.go:130] >       },
	I1212 00:15:10.965306   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965311   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965314   48339 command_runner.go:130] >     },
	I1212 00:15:10.965323   48339 command_runner.go:130] >     {
	I1212 00:15:10.965332   48339 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:15:10.965345   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965350   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:15:10.965354   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965358   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965373   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 00:15:10.965377   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965381   48339 command_runner.go:130] >       "size":  "20661043",
	I1212 00:15:10.965385   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965392   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965395   48339 command_runner.go:130] >       },
	I1212 00:15:10.965399   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965403   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965406   48339 command_runner.go:130] >     },
	I1212 00:15:10.965412   48339 command_runner.go:130] >     {
	I1212 00:15:10.965420   48339 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:15:10.965426   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965431   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:15:10.965434   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965438   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965446   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:15:10.965453   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965457   48339 command_runner.go:130] >       "size":  "22429671",
	I1212 00:15:10.965461   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965465   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965469   48339 command_runner.go:130] >     },
	I1212 00:15:10.965475   48339 command_runner.go:130] >     {
	I1212 00:15:10.965482   48339 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:15:10.965486   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965492   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:15:10.965497   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965502   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965515   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 00:15:10.965522   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965526   48339 command_runner.go:130] >       "size":  "15391364",
	I1212 00:15:10.965530   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965534   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965539   48339 command_runner.go:130] >       },
	I1212 00:15:10.965543   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965553   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965556   48339 command_runner.go:130] >     },
	I1212 00:15:10.965559   48339 command_runner.go:130] >     {
	I1212 00:15:10.965566   48339 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:15:10.965570   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965574   48339 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:15:10.965578   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965582   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965591   48339 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 00:15:10.965602   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965606   48339 command_runner.go:130] >       "size":  "267939",
	I1212 00:15:10.965610   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965614   48339 command_runner.go:130] >         "value":  "65535"
	I1212 00:15:10.965617   48339 command_runner.go:130] >       },
	I1212 00:15:10.965628   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965632   48339 command_runner.go:130] >       "pinned":  true
	I1212 00:15:10.965635   48339 command_runner.go:130] >     }
	I1212 00:15:10.965638   48339 command_runner.go:130] >   ]
	I1212 00:15:10.965640   48339 command_runner.go:130] > }
	I1212 00:15:10.968555   48339 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:15:10.968581   48339 containerd.go:534] Images already preloaded, skipping extraction
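
containerd.go:627 concludes from the JSON above that every required image is present. A sketch of how such a check can be built: decode "crictl images --output json" and compare against an expected tag list. The three tags below are copied from this log, not from minikube's bundled image manifest:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    type image struct {
        RepoTags []string `json:"repoTags"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var resp struct {
            Images []image `json:"images"`
        }
        if err := json.Unmarshal(out, &resp); err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range resp.Images {
            for _, t := range img.RepoTags {
                have[t] = true
            }
        }
        expected := []string{
            "registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
            "registry.k8s.io/etcd:3.6.5-0",
            "registry.k8s.io/pause:3.10.1",
        }
        var missing []string
        for _, e := range expected {
            if !have[e] {
                missing = append(missing, e)
            }
        }
        if len(missing) > 0 {
            fmt.Println("missing:", strings.Join(missing, ", "))
            return
        }
        fmt.Println("all images are preloaded")
    }
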
	I1212 00:15:10.968640   48339 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:15:10.995305   48339 command_runner.go:130] > {
	I1212 00:15:10.995329   48339 command_runner.go:130] >   "images":  [
	I1212 00:15:10.995334   48339 command_runner.go:130] >     {
	I1212 00:15:10.995344   48339 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:15:10.995349   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995355   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:15:10.995359   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995375   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995392   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 00:15:10.995395   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995400   48339 command_runner.go:130] >       "size":  "40636774",
	I1212 00:15:10.995404   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995408   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995414   48339 command_runner.go:130] >     },
	I1212 00:15:10.995418   48339 command_runner.go:130] >     {
	I1212 00:15:10.995429   48339 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:15:10.995438   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995444   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:15:10.995448   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995452   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995466   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:15:10.995470   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995475   48339 command_runner.go:130] >       "size":  "8034419",
	I1212 00:15:10.995483   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995487   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995490   48339 command_runner.go:130] >     },
	I1212 00:15:10.995493   48339 command_runner.go:130] >     {
	I1212 00:15:10.995500   48339 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:15:10.995506   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995512   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:15:10.995515   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995524   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995536   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 00:15:10.995540   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995544   48339 command_runner.go:130] >       "size":  "21168808",
	I1212 00:15:10.995554   48339 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:15:10.995558   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995561   48339 command_runner.go:130] >     },
	I1212 00:15:10.995564   48339 command_runner.go:130] >     {
	I1212 00:15:10.995572   48339 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:15:10.995583   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995588   48339 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:15:10.995592   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995596   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995603   48339 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 00:15:10.995611   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995615   48339 command_runner.go:130] >       "size":  "21136588",
	I1212 00:15:10.995619   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995623   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995631   48339 command_runner.go:130] >       },
	I1212 00:15:10.995635   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995639   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995642   48339 command_runner.go:130] >     },
	I1212 00:15:10.995646   48339 command_runner.go:130] >     {
	I1212 00:15:10.995659   48339 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:15:10.995663   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995678   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:15:10.995687   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995692   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995701   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 00:15:10.995709   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995713   48339 command_runner.go:130] >       "size":  "24678359",
	I1212 00:15:10.995716   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995727   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995734   48339 command_runner.go:130] >       },
	I1212 00:15:10.995738   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995743   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995746   48339 command_runner.go:130] >     },
	I1212 00:15:10.995749   48339 command_runner.go:130] >     {
	I1212 00:15:10.995756   48339 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:15:10.995762   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995768   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:15:10.995771   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995782   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995795   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 00:15:10.995798   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995802   48339 command_runner.go:130] >       "size":  "20661043",
	I1212 00:15:10.995811   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995815   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995820   48339 command_runner.go:130] >       },
	I1212 00:15:10.995830   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995834   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995838   48339 command_runner.go:130] >     },
	I1212 00:15:10.995841   48339 command_runner.go:130] >     {
	I1212 00:15:10.995847   48339 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:15:10.995854   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995859   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:15:10.995863   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995867   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995877   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:15:10.995884   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995888   48339 command_runner.go:130] >       "size":  "22429671",
	I1212 00:15:10.995893   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995902   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995906   48339 command_runner.go:130] >     },
	I1212 00:15:10.995909   48339 command_runner.go:130] >     {
	I1212 00:15:10.995916   48339 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:15:10.995924   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995929   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:15:10.995933   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995937   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995948   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 00:15:10.995952   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995956   48339 command_runner.go:130] >       "size":  "15391364",
	I1212 00:15:10.995963   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995967   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995983   48339 command_runner.go:130] >       },
	I1212 00:15:10.995993   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995997   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.996001   48339 command_runner.go:130] >     },
	I1212 00:15:10.996004   48339 command_runner.go:130] >     {
	I1212 00:15:10.996011   48339 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:15:10.996020   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.996025   48339 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:15:10.996029   48339 command_runner.go:130] >       ],
	I1212 00:15:10.996033   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.996046   48339 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 00:15:10.996053   48339 command_runner.go:130] >       ],
	I1212 00:15:10.996057   48339 command_runner.go:130] >       "size":  "267939",
	I1212 00:15:10.996061   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.996065   48339 command_runner.go:130] >         "value":  "65535"
	I1212 00:15:10.996074   48339 command_runner.go:130] >       },
	I1212 00:15:10.996078   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.996086   48339 command_runner.go:130] >       "pinned":  true
	I1212 00:15:10.996089   48339 command_runner.go:130] >     }
	I1212 00:15:10.996095   48339 command_runner.go:130] >   ]
	I1212 00:15:10.996103   48339 command_runner.go:130] > }
	I1212 00:15:10.997943   48339 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:15:10.997972   48339 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:15:10.997981   48339 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 00:15:10.998119   48339 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-767012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
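
In the kubelet unit above, the empty ExecStart= line is the standard systemd drop-in idiom: it clears any inherited ExecStart before the real command is declared, since systemd would otherwise reject a second ExecStart for a non-oneshot service. A sketch that writes such a drop-in; the /tmp path is illustrative, as real drop-ins live under /etc/systemd/system/kubelet.service.d/:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        dropin := strings.Join([]string{
            "[Unit]",
            "Wants=containerd.service",
            "",
            "[Service]",
            "ExecStart=", // reset the inherited command list
            "ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf",
            "",
        }, "\n")
        if err := os.WriteFile("/tmp/10-kubeadm.conf", []byte(dropin), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
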
	I1212 00:15:10.998212   48339 ssh_runner.go:195] Run: sudo crictl info
	I1212 00:15:11.021367   48339 command_runner.go:130] > {
	I1212 00:15:11.021387   48339 command_runner.go:130] >   "cniconfig": {
	I1212 00:15:11.021393   48339 command_runner.go:130] >     "Networks": [
	I1212 00:15:11.021397   48339 command_runner.go:130] >       {
	I1212 00:15:11.021403   48339 command_runner.go:130] >         "Config": {
	I1212 00:15:11.021408   48339 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1212 00:15:11.021413   48339 command_runner.go:130] >           "Name": "cni-loopback",
	I1212 00:15:11.021418   48339 command_runner.go:130] >           "Plugins": [
	I1212 00:15:11.021422   48339 command_runner.go:130] >             {
	I1212 00:15:11.021426   48339 command_runner.go:130] >               "Network": {
	I1212 00:15:11.021430   48339 command_runner.go:130] >                 "ipam": {},
	I1212 00:15:11.021438   48339 command_runner.go:130] >                 "type": "loopback"
	I1212 00:15:11.021445   48339 command_runner.go:130] >               },
	I1212 00:15:11.021450   48339 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1212 00:15:11.021457   48339 command_runner.go:130] >             }
	I1212 00:15:11.021461   48339 command_runner.go:130] >           ],
	I1212 00:15:11.021470   48339 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1212 00:15:11.021474   48339 command_runner.go:130] >         },
	I1212 00:15:11.021485   48339 command_runner.go:130] >         "IFName": "lo"
	I1212 00:15:11.021489   48339 command_runner.go:130] >       }
	I1212 00:15:11.021493   48339 command_runner.go:130] >     ],
	I1212 00:15:11.021498   48339 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1212 00:15:11.021504   48339 command_runner.go:130] >     "PluginDirs": [
	I1212 00:15:11.021509   48339 command_runner.go:130] >       "/opt/cni/bin"
	I1212 00:15:11.021514   48339 command_runner.go:130] >     ],
	I1212 00:15:11.021525   48339 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1212 00:15:11.021533   48339 command_runner.go:130] >     "Prefix": "eth"
	I1212 00:15:11.021537   48339 command_runner.go:130] >   },
	I1212 00:15:11.021540   48339 command_runner.go:130] >   "config": {
	I1212 00:15:11.021546   48339 command_runner.go:130] >     "cdiSpecDirs": [
	I1212 00:15:11.021552   48339 command_runner.go:130] >       "/etc/cdi",
	I1212 00:15:11.021558   48339 command_runner.go:130] >       "/var/run/cdi"
	I1212 00:15:11.021560   48339 command_runner.go:130] >     ],
	I1212 00:15:11.021563   48339 command_runner.go:130] >     "cni": {
	I1212 00:15:11.021567   48339 command_runner.go:130] >       "binDir": "",
	I1212 00:15:11.021571   48339 command_runner.go:130] >       "binDirs": [
	I1212 00:15:11.021574   48339 command_runner.go:130] >         "/opt/cni/bin"
	I1212 00:15:11.021577   48339 command_runner.go:130] >       ],
	I1212 00:15:11.021582   48339 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1212 00:15:11.021585   48339 command_runner.go:130] >       "confTemplate": "",
	I1212 00:15:11.021589   48339 command_runner.go:130] >       "ipPref": "",
	I1212 00:15:11.021592   48339 command_runner.go:130] >       "maxConfNum": 1,
	I1212 00:15:11.021597   48339 command_runner.go:130] >       "setupSerially": false,
	I1212 00:15:11.021601   48339 command_runner.go:130] >       "useInternalLoopback": false
	I1212 00:15:11.021604   48339 command_runner.go:130] >     },
	I1212 00:15:11.021610   48339 command_runner.go:130] >     "containerd": {
	I1212 00:15:11.021614   48339 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1212 00:15:11.021619   48339 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1212 00:15:11.021624   48339 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1212 00:15:11.021627   48339 command_runner.go:130] >       "runtimes": {
	I1212 00:15:11.021630   48339 command_runner.go:130] >         "runc": {
	I1212 00:15:11.021635   48339 command_runner.go:130] >           "ContainerAnnotations": null,
	I1212 00:15:11.021639   48339 command_runner.go:130] >           "PodAnnotations": null,
	I1212 00:15:11.021644   48339 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1212 00:15:11.021648   48339 command_runner.go:130] >           "cgroupWritable": false,
	I1212 00:15:11.021652   48339 command_runner.go:130] >           "cniConfDir": "",
	I1212 00:15:11.021656   48339 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1212 00:15:11.021664   48339 command_runner.go:130] >           "io_type": "",
	I1212 00:15:11.021670   48339 command_runner.go:130] >           "options": {
	I1212 00:15:11.021675   48339 command_runner.go:130] >             "BinaryName": "",
	I1212 00:15:11.021683   48339 command_runner.go:130] >             "CriuImagePath": "",
	I1212 00:15:11.021695   48339 command_runner.go:130] >             "CriuWorkPath": "",
	I1212 00:15:11.021703   48339 command_runner.go:130] >             "IoGid": 0,
	I1212 00:15:11.021708   48339 command_runner.go:130] >             "IoUid": 0,
	I1212 00:15:11.021712   48339 command_runner.go:130] >             "NoNewKeyring": false,
	I1212 00:15:11.021716   48339 command_runner.go:130] >             "Root": "",
	I1212 00:15:11.021723   48339 command_runner.go:130] >             "ShimCgroup": "",
	I1212 00:15:11.021728   48339 command_runner.go:130] >             "SystemdCgroup": false
	I1212 00:15:11.021734   48339 command_runner.go:130] >           },
	I1212 00:15:11.021739   48339 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1212 00:15:11.021745   48339 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1212 00:15:11.021749   48339 command_runner.go:130] >           "runtimePath": "",
	I1212 00:15:11.021755   48339 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1212 00:15:11.021761   48339 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1212 00:15:11.021765   48339 command_runner.go:130] >           "snapshotter": ""
	I1212 00:15:11.021770   48339 command_runner.go:130] >         }
	I1212 00:15:11.021774   48339 command_runner.go:130] >       }
	I1212 00:15:11.021778   48339 command_runner.go:130] >     },
	I1212 00:15:11.021790   48339 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1212 00:15:11.021799   48339 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1212 00:15:11.021805   48339 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1212 00:15:11.021810   48339 command_runner.go:130] >     "disableApparmor": false,
	I1212 00:15:11.021816   48339 command_runner.go:130] >     "disableHugetlbController": true,
	I1212 00:15:11.021821   48339 command_runner.go:130] >     "disableProcMount": false,
	I1212 00:15:11.021825   48339 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1212 00:15:11.021828   48339 command_runner.go:130] >     "enableCDI": true,
	I1212 00:15:11.021832   48339 command_runner.go:130] >     "enableSelinux": false,
	I1212 00:15:11.021840   48339 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1212 00:15:11.021845   48339 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1212 00:15:11.021852   48339 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1212 00:15:11.021858   48339 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1212 00:15:11.021868   48339 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1212 00:15:11.021873   48339 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1212 00:15:11.021877   48339 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1212 00:15:11.021886   48339 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1212 00:15:11.021890   48339 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1212 00:15:11.021896   48339 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1212 00:15:11.021901   48339 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1212 00:15:11.021907   48339 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1212 00:15:11.021910   48339 command_runner.go:130] >   },
	I1212 00:15:11.021914   48339 command_runner.go:130] >   "features": {
	I1212 00:15:11.021919   48339 command_runner.go:130] >     "supplemental_groups_policy": true
	I1212 00:15:11.021922   48339 command_runner.go:130] >   },
	I1212 00:15:11.021926   48339 command_runner.go:130] >   "golang": "go1.24.9",
	I1212 00:15:11.021938   48339 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 00:15:11.021951   48339 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 00:15:11.021954   48339 command_runner.go:130] >   "runtimeHandlers": [
	I1212 00:15:11.021957   48339 command_runner.go:130] >     {
	I1212 00:15:11.021961   48339 command_runner.go:130] >       "features": {
	I1212 00:15:11.021973   48339 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 00:15:11.021977   48339 command_runner.go:130] >         "user_namespaces": true
	I1212 00:15:11.021984   48339 command_runner.go:130] >       }
	I1212 00:15:11.021991   48339 command_runner.go:130] >     },
	I1212 00:15:11.021996   48339 command_runner.go:130] >     {
	I1212 00:15:11.022000   48339 command_runner.go:130] >       "features": {
	I1212 00:15:11.022006   48339 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 00:15:11.022013   48339 command_runner.go:130] >         "user_namespaces": true
	I1212 00:15:11.022016   48339 command_runner.go:130] >       },
	I1212 00:15:11.022021   48339 command_runner.go:130] >       "name": "runc"
	I1212 00:15:11.022026   48339 command_runner.go:130] >     }
	I1212 00:15:11.022029   48339 command_runner.go:130] >   ],
	I1212 00:15:11.022033   48339 command_runner.go:130] >   "status": {
	I1212 00:15:11.022045   48339 command_runner.go:130] >     "conditions": [
	I1212 00:15:11.022048   48339 command_runner.go:130] >       {
	I1212 00:15:11.022055   48339 command_runner.go:130] >         "message": "",
	I1212 00:15:11.022059   48339 command_runner.go:130] >         "reason": "",
	I1212 00:15:11.022065   48339 command_runner.go:130] >         "status": true,
	I1212 00:15:11.022070   48339 command_runner.go:130] >         "type": "RuntimeReady"
	I1212 00:15:11.022073   48339 command_runner.go:130] >       },
	I1212 00:15:11.022076   48339 command_runner.go:130] >       {
	I1212 00:15:11.022083   48339 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1212 00:15:11.022087   48339 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1212 00:15:11.022094   48339 command_runner.go:130] >         "status": false,
	I1212 00:15:11.022099   48339 command_runner.go:130] >         "type": "NetworkReady"
	I1212 00:15:11.022104   48339 command_runner.go:130] >       },
	I1212 00:15:11.022107   48339 command_runner.go:130] >       {
	I1212 00:15:11.022132   48339 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1212 00:15:11.022141   48339 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1212 00:15:11.022149   48339 command_runner.go:130] >         "status": false,
	I1212 00:15:11.022155   48339 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1212 00:15:11.022158   48339 command_runner.go:130] >       }
	I1212 00:15:11.022161   48339 command_runner.go:130] >     ]
	I1212 00:15:11.022164   48339 command_runner.go:130] >   }
	I1212 00:15:11.022166   48339 command_runner.go:130] > }
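The JSON dump above is the CRI runtime status that minikube reads back from containerd before it configures networking; the NetworkReady=false condition with reason NetworkPluginNotReady is expected at this point because nothing has written a CNI config to /etc/cni/net.d yet. As a minimal sketch, assuming the crictl binary is on PATH and pointed at the same containerd socket, the same conditions could be extracted in Go like this (the helper and type names are illustrative, not minikube's own code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// condition mirrors the entries under status.conditions in `crictl info` output.
type condition struct {
	Type    string `json:"type"`
	Status  bool   `json:"status"`
	Reason  string `json:"reason"`
	Message string `json:"message"`
}

type criInfo struct {
	Status struct {
		Conditions []condition `json:"conditions"`
	} `json:"status"`
}

func main() {
	// `crictl info` prints the same JSON document that appears in the log above.
	out, err := exec.Command("crictl", "info").Output()
	if err != nil {
		panic(err)
	}
	var info criInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	for _, c := range info.Status.Conditions {
		fmt.Printf("%s=%t reason=%q\n", c.Type, c.Status, c.Reason)
	}
}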
	I1212 00:15:11.024522   48339 cni.go:84] Creating CNI manager for ""
	I1212 00:15:11.024547   48339 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:15:11.024564   48339 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:15:11.024607   48339 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-767012 NodeName:functional-767012 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:15:11.024773   48339 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-767012"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
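The kubeadm.go:196 dump above is the fully rendered kubeadm/kubelet/kube-proxy manifest that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A hedged sketch of how a manifest like this can be rendered from an options struct with text/template follows; the struct fields and the abbreviated template here are illustrative assumptions, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// opts holds the handful of values that vary between clusters; everything
// else in the manifest above is effectively constant.
type opts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Render to stdout; minikube instead scps the rendered bytes to the node.
	if err := t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.49.2",
		BindPort:         8441,
		NodeName:         "functional-767012",
		PodSubnet:        "10.244.0.0/16",
	}); err != nil {
		panic(err)
	}
}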
	
	I1212 00:15:11.024850   48339 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:15:11.031979   48339 command_runner.go:130] > kubeadm
	I1212 00:15:11.031999   48339 command_runner.go:130] > kubectl
	I1212 00:15:11.032004   48339 command_runner.go:130] > kubelet
	I1212 00:15:11.033031   48339 binaries.go:51] Found k8s binaries, skipping transfer
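binaries.go:51 decides whether kubeadm, kubectl, and kubelet need to be copied onto the node by listing /var/lib/minikube/binaries/<version>, as the `sudo ls` above shows. A minimal sketch of that presence check, written as a hypothetical local helper rather than a command run over SSH:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func haveBinaries(dir string) bool {
	for _, b := range []string{"kubeadm", "kubectl", "kubelet"} {
		if _, err := os.Stat(filepath.Join(dir, b)); err != nil {
			return false // missing or unreadable: transfer needed
		}
	}
	return true
}

func main() {
	dir := "/var/lib/minikube/binaries/v1.35.0-beta.0"
	if haveBinaries(dir) {
		fmt.Println("Found k8s binaries, skipping transfer")
	} else {
		fmt.Println("binaries missing, would transfer")
	}
}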
	I1212 00:15:11.033131   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:15:11.041032   48339 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 00:15:11.054723   48339 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:15:11.067854   48339 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1212 00:15:11.081373   48339 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:15:11.085014   48339 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 00:15:11.085116   48339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:15:11.226173   48339 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:15:12.035778   48339 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012 for IP: 192.168.49.2
	I1212 00:15:12.035798   48339 certs.go:195] generating shared ca certs ...
	I1212 00:15:12.035830   48339 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:15:12.035967   48339 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 00:15:12.036010   48339 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 00:15:12.036017   48339 certs.go:257] generating profile certs ...
	I1212 00:15:12.036117   48339 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key
	I1212 00:15:12.036165   48339 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key.fcbff5a4
	I1212 00:15:12.036201   48339 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key
	I1212 00:15:12.036209   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:15:12.036224   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:15:12.036235   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:15:12.036248   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:15:12.036258   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:15:12.036270   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:15:12.036281   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:15:12.036294   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:15:12.036341   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 00:15:12.036372   48339 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 00:15:12.036381   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:15:12.036409   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:15:12.036440   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:15:12.036468   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 00:15:12.036516   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:15:12.036546   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem -> /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.036558   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.036578   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.037134   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:15:12.059224   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:15:12.079145   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:15:12.096868   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:15:12.114531   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:15:12.132828   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:15:12.150161   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:15:12.168014   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:15:12.185251   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 00:15:12.202557   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 00:15:12.219625   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:15:12.237574   48339 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:15:12.250472   48339 ssh_runner.go:195] Run: openssl version
	I1212 00:15:12.256541   48339 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 00:15:12.256947   48339 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.264387   48339 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 00:15:12.271688   48339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.275404   48339 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.275432   48339 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.275482   48339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.315860   48339 command_runner.go:130] > 51391683
	I1212 00:15:12.316400   48339 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:15:12.323656   48339 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.330945   48339 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 00:15:12.339131   48339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.343064   48339 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.343159   48339 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.343241   48339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.383845   48339 command_runner.go:130] > 3ec20f2e
	I1212 00:15:12.384302   48339 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:15:12.391740   48339 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.398710   48339 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:15:12.406076   48339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.409726   48339 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.409770   48339 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.409826   48339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.450507   48339 command_runner.go:130] > b5213941
	I1212 00:15:12.450926   48339 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
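Each block above computes the OpenSSL subject hash of a CA file (`openssl x509 -hash -noout -in ...`) and symlinks `<hash>.0` into /etc/ssl/certs, which is the convention OpenSSL uses to locate trust anchors by hash. A sketch of the same hash-and-symlink step, shelling out to openssl the way the ssh_runner lines do (requires root to write /etc/ssl/certs; the function name is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA mimics the steps in the log: the .0 suffix is the OpenSSL
// convention for the first certificate with a given subject hash.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}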
	I1212 00:15:12.458188   48339 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:15:12.461873   48339 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:15:12.461949   48339 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 00:15:12.461961   48339 command_runner.go:130] > Device: 259,1	Inode: 1311423     Links: 1
	I1212 00:15:12.461969   48339 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:15:12.461975   48339 command_runner.go:130] > Access: 2025-12-12 00:11:05.099200071 +0000
	I1212 00:15:12.461979   48339 command_runner.go:130] > Modify: 2025-12-12 00:07:00.969098600 +0000
	I1212 00:15:12.461984   48339 command_runner.go:130] > Change: 2025-12-12 00:07:00.969098600 +0000
	I1212 00:15:12.461989   48339 command_runner.go:130] >  Birth: 2025-12-12 00:07:00.969098600 +0000
	I1212 00:15:12.462077   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:15:12.504549   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.505002   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:15:12.545847   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.545927   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:15:12.586405   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.586767   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:15:12.629151   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.629637   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:15:12.671966   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.672529   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 00:15:12.713858   48339 command_runner.go:130] > Certificate will not expire
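The `-checkend 86400` runs above ask OpenSSL whether each control-plane certificate expires within the next 24 hours (86400 seconds); exit status 0 means the cert will still be valid, non-zero means it is about to expire and would be regenerated. A small sketch of that check (assumed wrapper, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinDay wraps `openssl x509 -checkend 86400`: a zero exit code
// means the cert is still valid a day from now.
func expiresWithinDay(certPath string) bool {
	err := exec.Command("openssl", "x509", "-noout",
		"-in", certPath, "-checkend", "86400").Run()
	return err != nil
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		if expiresWithinDay(c) {
			fmt.Println(c, "expires within 24h, would regenerate")
		} else {
			fmt.Println(c, "will not expire")
		}
	}
}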
	I1212 00:15:12.714272   48339 kubeadm.go:401] StartCluster: {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:15:12.714367   48339 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 00:15:12.714442   48339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:15:12.749902   48339 cri.go:89] found id: ""
	I1212 00:15:12.750000   48339 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:15:12.759407   48339 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 00:15:12.759429   48339 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 00:15:12.759437   48339 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 00:15:12.760379   48339 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:15:12.760398   48339 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:15:12.760457   48339 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:15:12.768161   48339 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
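kubeadm.go:417 chose "cluster restart" over a fresh `kubeadm init` because the `sudo ls` above found all three artifacts a previous kubeadm run leaves behind. A minimal sketch of that decision, under the assumption it reduces to a presence check on those paths (function name hypothetical):

package main

import (
	"fmt"
	"os"
)

// wantsRestart mirrors the `sudo ls ...` probe in the log: if every file a
// previous kubeadm run leaves behind is present, attempt a restart.
func wantsRestart() bool {
	paths := []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	}
	for _, p := range paths {
		if _, err := os.Stat(p); err != nil {
			return false
		}
	}
	return true
}

func main() {
	if wantsRestart() {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("no prior state, would run kubeadm init")
	}
}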
	I1212 00:15:12.768602   48339 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-767012" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.768706   48339 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-2343/kubeconfig needs updating (will repair): [kubeconfig missing "functional-767012" cluster setting kubeconfig missing "functional-767012" context setting]
	I1212 00:15:12.769002   48339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:15:12.769434   48339 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.769575   48339 kapi.go:59] client config for functional-767012: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key", CAFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:15:12.770098   48339 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 00:15:12.770119   48339 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 00:15:12.770125   48339 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 00:15:12.770129   48339 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 00:15:12.770134   48339 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
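The kapi.go:59 line above shows the client-go rest.Config minikube builds for the profile: https://192.168.49.2:8441 plus the profile's client cert, key, and CA. A hedged sketch of the equivalent construction with client-go, using the same file paths from the log (this is the standard rest.Config shape, not minikube's exact code):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Mirrors the TLSClientConfig fields visible in the log above.
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8441",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key",
			CAFile:   "/home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", clientset)
}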
	I1212 00:15:12.770402   48339 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:15:12.770508   48339 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 00:15:12.778529   48339 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 00:15:12.778562   48339 kubeadm.go:602] duration metric: took 18.158491ms to restartPrimaryControlPlane
	I1212 00:15:12.778572   48339 kubeadm.go:403] duration metric: took 64.30535ms to StartCluster
	I1212 00:15:12.778619   48339 settings.go:142] acquiring lock: {Name:mk6dd4250df69aeba4752e9f33aeef37272375c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:15:12.778710   48339 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.779343   48339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:15:12.779578   48339 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 00:15:12.779758   48339 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:15:12.779798   48339 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:15:12.779860   48339 addons.go:70] Setting storage-provisioner=true in profile "functional-767012"
	I1212 00:15:12.779873   48339 addons.go:239] Setting addon storage-provisioner=true in "functional-767012"
	I1212 00:15:12.779899   48339 host.go:66] Checking if "functional-767012" exists ...
	I1212 00:15:12.780379   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:12.780789   48339 addons.go:70] Setting default-storageclass=true in profile "functional-767012"
	I1212 00:15:12.780811   48339 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-767012"
	I1212 00:15:12.781090   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:12.784774   48339 out.go:179] * Verifying Kubernetes components...
	I1212 00:15:12.788318   48339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:15:12.822440   48339 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.822619   48339 kapi.go:59] client config for functional-767012: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key", CAFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:15:12.822882   48339 addons.go:239] Setting addon default-storageclass=true in "functional-767012"
	I1212 00:15:12.822910   48339 host.go:66] Checking if "functional-767012" exists ...
	I1212 00:15:12.823362   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:12.828706   48339 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:15:12.831719   48339 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:12.831746   48339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:15:12.831810   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:12.856565   48339 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:12.856586   48339 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:15:12.856663   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:12.891591   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:12.907113   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
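The cli_runner lines above resolve the SSH endpoint by asking Docker which host port is mapped to the container's 22/tcp; the sshutil client then connects to 127.0.0.1:<port> (32788 here). A sketch of that port lookup, shelling out to docker with the same format string the log shows:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the first host port bound to the container's 22/tcp.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("functional-767012")
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
}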
	I1212 00:15:13.031282   48339 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:15:13.038860   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:13.055219   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:13.785959   48339 node_ready.go:35] waiting up to 6m0s for node "functional-767012" to be "Ready" ...
	I1212 00:15:13.786096   48339 type.go:168] "Request Body" body=""
	I1212 00:15:13.786201   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:13.786332   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:13.786513   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:13.786544   48339 retry.go:31] will retry after 252.334378ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:13.786634   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:13.786678   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:13.786692   48339 retry.go:31] will retry after 187.958053ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
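The apply failures above are expected while the apiserver is still restarting; retry.go re-runs each kubectl apply after randomized, roughly exponentially growing delays (252ms and 187ms here, climbing past 2.5s further down the log). A minimal sketch of that retry pattern, assuming jittered exponential backoff is the mechanism (minikube's retry.go has its own implementation):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply` with jittered exponential backoff
// until it succeeds or the attempt budget is exhausted.
func applyWithRetry(manifest string, attempts int) error {
	delay := 250 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 6); err != nil {
		fmt.Println("giving up:", err)
	}
}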
	I1212 00:15:13.786725   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:13.975259   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:14.039772   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:14.044477   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.044582   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.044648   48339 retry.go:31] will retry after 322.190642ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.103040   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.103100   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.103119   48339 retry.go:31] will retry after 449.616448ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.286283   48339 type.go:168] "Request Body" body=""
	I1212 00:15:14.286357   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:14.286666   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:14.367911   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:14.423058   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.426726   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.426805   48339 retry.go:31] will retry after 304.882295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.552989   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:14.624219   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.624296   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.624324   48339 retry.go:31] will retry after 431.233251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.732500   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:14.787073   48339 type.go:168] "Request Body" body=""
	I1212 00:15:14.787160   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:14.787408   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:14.793570   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.793617   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.793638   48339 retry.go:31] will retry after 814.242182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.055819   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:15.115988   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:15.119844   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.119920   48339 retry.go:31] will retry after 1.173578041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.287015   48339 type.go:168] "Request Body" body=""
	I1212 00:15:15.287127   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:15.287435   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:15.608995   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:15.668352   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:15.672074   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.672106   48339 retry.go:31] will retry after 987.735436ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.786224   48339 type.go:168] "Request Body" body=""
	I1212 00:15:15.786336   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:15.786676   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:15.786781   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
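node_ready.go:35 is polling GET /api/v1/nodes/functional-767012 roughly every 500ms, swallowing the connection-refused errors above while the apiserver restarts, until the node's Ready condition is true or the 6m budget runs out. A hedged sketch of that wait using client-go's standard poll helper (an assumed shape, not minikube's code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22101-2343/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms for up to 6 minutes; transient errors (e.g. connection
	// refused during an apiserver restart) are ignored so polling continues.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-767012", metav1.GetOptions{})
			if err != nil {
				return false, nil // retry on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node Ready wait finished:", err)
}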
	I1212 00:15:16.286218   48339 type.go:168] "Request Body" body=""
	I1212 00:15:16.286309   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:16.286618   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:16.293963   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:16.350242   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:16.354044   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.354074   48339 retry.go:31] will retry after 1.703488512s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.660633   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:16.720806   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:16.720847   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.720866   48339 retry.go:31] will retry after 1.717481089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.787045   48339 type.go:168] "Request Body" body=""
	I1212 00:15:16.787165   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:16.787500   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:17.287197   48339 type.go:168] "Request Body" body=""
	I1212 00:15:17.287287   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:17.287663   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:17.786193   48339 type.go:168] "Request Body" body=""
	I1212 00:15:17.786301   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:17.786622   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:18.058032   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:18.119712   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:18.119758   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.119777   48339 retry.go:31] will retry after 2.564790813s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.286189   48339 type.go:168] "Request Body" body=""
	I1212 00:15:18.286256   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:18.286531   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:18.286571   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
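Note: the interleaved round_trippers blocks are minikube polling the node object roughly every 500ms for its Ready condition, sending the Accept and User-Agent headers shown. A self-contained sketch of that poll loop against the same endpoint, using only net/http; the real client negotiates protobuf and authenticates, so the insecure TLS config and the bare GET here are stand-in assumptions:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// URL and headers are copied from the log; skipping certificate
	// verification is for this sketch only.
	url := "https://192.168.49.2:8441/api/v1/nodes/functional-767012"
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 10; i++ {
		req, err := http.NewRequest("GET", url, nil)
		if err != nil {
			panic(err)
		}
		req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
		req.Header.Set("User-Agent", "minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format")
		resp, err := client.Do(req)
		if err != nil {
			// While the apiserver is down this is the "connection refused"
			// branch logged repeatedly above; minikube warns and retries.
			fmt.Println("will retry:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		fmt.Println("status:", resp.Status)
		break
	}
}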
	I1212 00:15:18.438948   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:18.492343   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:18.495818   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.495853   48339 retry.go:31] will retry after 3.474173077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.786235   48339 type.go:168] "Request Body" body=""
	I1212 00:15:18.786319   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:18.786633   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:19.286373   48339 type.go:168] "Request Body" body=""
	I1212 00:15:19.286489   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:19.286915   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:19.786192   48339 type.go:168] "Request Body" body=""
	I1212 00:15:19.786262   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:19.786531   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:20.286266   48339 type.go:168] "Request Body" body=""
	I1212 00:15:20.286338   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:20.286671   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:20.286730   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:20.685395   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:20.744336   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:20.744377   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:20.744397   48339 retry.go:31] will retry after 3.068053389s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:20.786556   48339 type.go:168] "Request Body" body=""
	I1212 00:15:20.786632   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:20.787017   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:21.286794   48339 type.go:168] "Request Body" body=""
	I1212 00:15:21.286863   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:21.287178   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:21.786938   48339 type.go:168] "Request Body" body=""
	I1212 00:15:21.787095   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:21.787425   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:21.970778   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:22.029300   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:22.033382   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:22.033416   48339 retry.go:31] will retry after 3.143683139s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
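Note: each apply attempt is the literal command in the ssh_runner lines: kubectl invoked under sudo with KUBECONFIG passed as a leading environment assignment. A hedged Go sketch of shelling out the same way; the paths are copied from the log and are placeholders on any other machine, and the error handling is simplified:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// sudo accepts the leading VAR=value assignment, matching the
	// logged command line exactly.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// With the apiserver unreachable, kubectl cannot download the
		// OpenAPI schema, fails validation, and exits 1 -- the status
		// addons.go:477 reports before scheduling the next retry.
		fmt.Println("apply failed:", err)
	}
}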
	I1212 00:15:22.286887   48339 type.go:168] "Request Body" body=""
	I1212 00:15:22.286963   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:22.287298   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:22.287349   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:22.786122   48339 type.go:168] "Request Body" body=""
	I1212 00:15:22.786203   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:22.786515   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:23.286522   48339 type.go:168] "Request Body" body=""
	I1212 00:15:23.286595   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:23.286902   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:23.786669   48339 type.go:168] "Request Body" body=""
	I1212 00:15:23.786750   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:23.787071   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:23.813245   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:23.872447   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:23.872484   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:23.872503   48339 retry.go:31] will retry after 4.295118946s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:24.286878   48339 type.go:168] "Request Body" body=""
	I1212 00:15:24.286966   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:24.287236   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:24.787020   48339 type.go:168] "Request Body" body=""
	I1212 00:15:24.787113   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:24.787396   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:24.787455   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:25.178129   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:25.240141   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:25.243777   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:25.243806   48339 retry.go:31] will retry after 9.168145583s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:25.286119   48339 type.go:168] "Request Body" body=""
	I1212 00:15:25.286212   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:25.286559   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:25.787134   48339 type.go:168] "Request Body" body=""
	I1212 00:15:25.787314   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:25.787683   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:26.286268   48339 type.go:168] "Request Body" body=""
	I1212 00:15:26.286357   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:26.286692   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:26.786194   48339 type.go:168] "Request Body" body=""
	I1212 00:15:26.786286   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:26.786601   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:27.286932   48339 type.go:168] "Request Body" body=""
	I1212 00:15:27.287015   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:27.287267   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:27.287315   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:27.787077   48339 type.go:168] "Request Body" body=""
	I1212 00:15:27.787176   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:27.787513   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:28.168008   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:28.231881   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:28.231917   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:28.231944   48339 retry.go:31] will retry after 6.344313185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:28.286314   48339 type.go:168] "Request Body" body=""
	I1212 00:15:28.286400   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:28.286700   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:28.786192   48339 type.go:168] "Request Body" body=""
	I1212 00:15:28.786267   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:28.786531   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:29.286238   48339 type.go:168] "Request Body" body=""
	I1212 00:15:29.286308   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:29.286623   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:29.786295   48339 type.go:168] "Request Body" body=""
	I1212 00:15:29.786368   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:29.786689   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:29.786753   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:30.287110   48339 type.go:168] "Request Body" body=""
	I1212 00:15:30.287175   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:30.287426   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:30.786872   48339 type.go:168] "Request Body" body=""
	I1212 00:15:30.786960   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:30.787297   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:31.286942   48339 type.go:168] "Request Body" body=""
	I1212 00:15:31.287032   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:31.287368   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:31.786980   48339 type.go:168] "Request Body" body=""
	I1212 00:15:31.787074   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:31.787418   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:31.787478   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:32.286186   48339 type.go:168] "Request Body" body=""
	I1212 00:15:32.286271   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:32.286599   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:32.786430   48339 type.go:168] "Request Body" body=""
	I1212 00:15:32.786534   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:32.786856   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:33.286674   48339 type.go:168] "Request Body" body=""
	I1212 00:15:33.286767   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:33.287049   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:33.786796   48339 type.go:168] "Request Body" body=""
	I1212 00:15:33.786868   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:33.787225   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:34.286903   48339 type.go:168] "Request Body" body=""
	I1212 00:15:34.287005   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:34.287348   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:34.287421   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:34.412873   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:34.471886   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:34.475429   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:34.475459   48339 retry.go:31] will retry after 5.427832253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:34.576727   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:34.645023   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:34.645064   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:34.645084   48339 retry.go:31] will retry after 14.315988892s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:34.786162   48339 type.go:168] "Request Body" body=""
	I1212 00:15:34.786245   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:34.786506   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:35.286256   48339 type.go:168] "Request Body" body=""
	I1212 00:15:35.286369   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:35.286766   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:35.786480   48339 type.go:168] "Request Body" body=""
	I1212 00:15:35.786551   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:35.786861   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:36.286546   48339 type.go:168] "Request Body" body=""
	I1212 00:15:36.286613   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:36.286890   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:36.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:15:36.786309   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:36.786640   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:36.786704   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:37.286243   48339 type.go:168] "Request Body" body=""
	I1212 00:15:37.286323   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:37.286640   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:37.786331   48339 type.go:168] "Request Body" body=""
	I1212 00:15:37.786426   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:37.786691   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:38.286739   48339 type.go:168] "Request Body" body=""
	I1212 00:15:38.286834   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:38.287212   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:38.787067   48339 type.go:168] "Request Body" body=""
	I1212 00:15:38.787165   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:38.787505   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:38.787556   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:39.286897   48339 type.go:168] "Request Body" body=""
	I1212 00:15:39.286974   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:39.287246   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:39.787072   48339 type.go:168] "Request Body" body=""
	I1212 00:15:39.787155   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:39.787481   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:39.903977   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:39.961517   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:39.961553   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:39.961584   48339 retry.go:31] will retry after 9.825060256s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:40.286904   48339 type.go:168] "Request Body" body=""
	I1212 00:15:40.287016   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:40.287324   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:40.786920   48339 type.go:168] "Request Body" body=""
	I1212 00:15:40.787007   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:40.787265   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:41.287079   48339 type.go:168] "Request Body" body=""
	I1212 00:15:41.287171   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:41.287483   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:41.287535   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:41.786177   48339 type.go:168] "Request Body" body=""
	I1212 00:15:41.786245   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:41.786586   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:42.286210   48339 type.go:168] "Request Body" body=""
	I1212 00:15:42.286304   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:42.286665   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:42.786373   48339 type.go:168] "Request Body" body=""
	I1212 00:15:42.786449   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:42.786735   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:43.286695   48339 type.go:168] "Request Body" body=""
	I1212 00:15:43.286781   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:43.287063   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:43.786792   48339 type.go:168] "Request Body" body=""
	I1212 00:15:43.786867   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:43.787142   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:43.787197   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:44.286976   48339 type.go:168] "Request Body" body=""
	I1212 00:15:44.287083   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:44.287398   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:44.786120   48339 type.go:168] "Request Body" body=""
	I1212 00:15:44.786194   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:44.786513   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:45.286282   48339 type.go:168] "Request Body" body=""
	I1212 00:15:45.286447   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:45.286824   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:45.786533   48339 type.go:168] "Request Body" body=""
	I1212 00:15:45.786632   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:45.786951   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:46.286792   48339 type.go:168] "Request Body" body=""
	I1212 00:15:46.286884   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:46.287186   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:46.287237   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:46.786874   48339 type.go:168] "Request Body" body=""
	I1212 00:15:46.786956   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:46.787268   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:47.287109   48339 type.go:168] "Request Body" body=""
	I1212 00:15:47.287201   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:47.287499   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:47.786233   48339 type.go:168] "Request Body" body=""
	I1212 00:15:47.786303   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:47.786629   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:48.286436   48339 type.go:168] "Request Body" body=""
	I1212 00:15:48.286503   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:48.286772   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:48.786216   48339 type.go:168] "Request Body" body=""
	I1212 00:15:48.786290   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:48.786671   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:48.786725   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:48.962079   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:49.024775   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:49.024824   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:49.024842   48339 retry.go:31] will retry after 15.053349185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:49.286133   48339 type.go:168] "Request Body" body=""
	I1212 00:15:49.286218   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:49.286771   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:49.786188   48339 type.go:168] "Request Body" body=""
	I1212 00:15:49.786266   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:49.786639   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:49.786790   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:49.873069   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:49.873108   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:49.873126   48339 retry.go:31] will retry after 17.371130847s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:50.286878   48339 type.go:168] "Request Body" body=""
	I1212 00:15:50.286961   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:50.287310   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:50.787122   48339 type.go:168] "Request Body" body=""
	I1212 00:15:50.787202   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:50.787523   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:50.787579   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:51.286912   48339 type.go:168] "Request Body" body=""
	I1212 00:15:51.286981   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:51.287298   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:51.787059   48339 type.go:168] "Request Body" body=""
	I1212 00:15:51.787135   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:51.787456   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:52.286151   48339 type.go:168] "Request Body" body=""
	I1212 00:15:52.286226   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:52.286553   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:52.786336   48339 type.go:168] "Request Body" body=""
	I1212 00:15:52.786407   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:52.786699   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:53.286548   48339 type.go:168] "Request Body" body=""
	I1212 00:15:53.286619   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:53.286939   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:53.287009   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:53.786505   48339 type.go:168] "Request Body" body=""
	I1212 00:15:53.786577   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:53.786912   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:54.286719   48339 type.go:168] "Request Body" body=""
	I1212 00:15:54.286786   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:54.287059   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:54.786836   48339 type.go:168] "Request Body" body=""
	I1212 00:15:54.786933   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:54.787274   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:55.287094   48339 type.go:168] "Request Body" body=""
	I1212 00:15:55.287171   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:55.287511   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:55.287570   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:55.786152   48339 type.go:168] "Request Body" body=""
	I1212 00:15:55.786220   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:55.786474   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:56.286213   48339 type.go:168] "Request Body" body=""
	I1212 00:15:56.286312   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:56.286631   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:56.786168   48339 type.go:168] "Request Body" body=""
	I1212 00:15:56.786239   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:56.786561   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:57.287075   48339 type.go:168] "Request Body" body=""
	I1212 00:15:57.287147   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:57.287400   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:57.787153   48339 type.go:168] "Request Body" body=""
	I1212 00:15:57.787225   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:57.787534   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:57.787585   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:58.286375   48339 type.go:168] "Request Body" body=""
	I1212 00:15:58.286450   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:58.286783   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:58.786215   48339 type.go:168] "Request Body" body=""
	I1212 00:15:58.786282   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:58.786594   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:59.286241   48339 type.go:168] "Request Body" body=""
	I1212 00:15:59.286312   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:59.286622   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:59.786317   48339 type.go:168] "Request Body" body=""
	I1212 00:15:59.786388   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:59.786719   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:00.292274   48339 type.go:168] "Request Body" body=""
	I1212 00:16:00.292358   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:00.292654   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:00.292703   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:00.786205   48339 type.go:168] "Request Body" body=""
	I1212 00:16:00.786286   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:00.786644   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:01.286348   48339 type.go:168] "Request Body" body=""
	I1212 00:16:01.286432   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:01.286773   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:01.787146   48339 type.go:168] "Request Body" body=""
	I1212 00:16:01.787221   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:01.787510   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:02.286209   48339 type.go:168] "Request Body" body=""
	I1212 00:16:02.286300   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:02.286617   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:02.786467   48339 type.go:168] "Request Body" body=""
	I1212 00:16:02.786540   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:02.786883   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:02.786938   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:03.286672   48339 type.go:168] "Request Body" body=""
	I1212 00:16:03.286737   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:03.287012   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:03.786797   48339 type.go:168] "Request Body" body=""
	I1212 00:16:03.786868   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:03.787218   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:04.078782   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:16:04.137731   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:04.141181   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:04.141215   48339 retry.go:31] will retry after 17.411337884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:04.286486   48339 type.go:168] "Request Body" body=""
	I1212 00:16:04.286564   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:04.286889   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:04.786185   48339 type.go:168] "Request Body" body=""
	I1212 00:16:04.786276   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:04.786662   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:05.286261   48339 type.go:168] "Request Body" body=""
	I1212 00:16:05.286336   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:05.286651   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:05.286703   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:05.786375   48339 type.go:168] "Request Body" body=""
	I1212 00:16:05.786467   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:05.786794   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:06.286188   48339 type.go:168] "Request Body" body=""
	I1212 00:16:06.286265   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:06.286589   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:06.786260   48339 type.go:168] "Request Body" body=""
	I1212 00:16:06.786341   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:06.786641   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:07.245320   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:16:07.286783   48339 type.go:168] "Request Body" body=""
	I1212 00:16:07.286895   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:07.287194   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:07.287250   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:07.304749   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:07.304789   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:07.304807   48339 retry.go:31] will retry after 24.953429831s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:07.787063   48339 type.go:168] "Request Body" body=""
	I1212 00:16:07.787138   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:07.787437   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:08.286404   48339 type.go:168] "Request Body" body=""
	I1212 00:16:08.286476   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:08.286783   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:08.786218   48339 type.go:168] "Request Body" body=""
	I1212 00:16:08.786293   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:08.786671   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:09.286981   48339 type.go:168] "Request Body" body=""
	I1212 00:16:09.287066   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:09.287329   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:09.287373   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:09.787100   48339 type.go:168] "Request Body" body=""
	I1212 00:16:09.787195   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:09.787534   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:10.286235   48339 type.go:168] "Request Body" body=""
	I1212 00:16:10.286321   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:10.286701   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:10.786206   48339 type.go:168] "Request Body" body=""
	I1212 00:16:10.786294   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:10.786608   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:11.286206   48339 type.go:168] "Request Body" body=""
	I1212 00:16:11.286280   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:11.286613   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:11.786187   48339 type.go:168] "Request Body" body=""
	I1212 00:16:11.786279   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:11.786620   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:11.786679   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:12.286942   48339 type.go:168] "Request Body" body=""
	I1212 00:16:12.287031   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:12.287292   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:12.786305   48339 type.go:168] "Request Body" body=""
	I1212 00:16:12.786379   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:12.786714   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:13.286636   48339 type.go:168] "Request Body" body=""
	I1212 00:16:13.286735   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:13.287061   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:13.786837   48339 type.go:168] "Request Body" body=""
	I1212 00:16:13.786905   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:13.787175   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:13.787217   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:14.286785   48339 type.go:168] "Request Body" body=""
	I1212 00:16:14.286860   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:14.287199   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:14.786985   48339 type.go:168] "Request Body" body=""
	I1212 00:16:14.787080   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:14.787391   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:15.287017   48339 type.go:168] "Request Body" body=""
	I1212 00:16:15.287092   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:15.287365   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:15.786104   48339 type.go:168] "Request Body" body=""
	I1212 00:16:15.786194   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:15.786515   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:16.286214   48339 type.go:168] "Request Body" body=""
	I1212 00:16:16.286312   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:16.286611   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:16.286662   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:16.787101   48339 type.go:168] "Request Body" body=""
	I1212 00:16:16.787177   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:16.787436   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:17.286203   48339 type.go:168] "Request Body" body=""
	I1212 00:16:17.286282   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:17.286588   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:17.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:16:17.786273   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:17.786616   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:18.286463   48339 type.go:168] "Request Body" body=""
	I1212 00:16:18.286538   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:18.286889   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:18.286938   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:18.786189   48339 type.go:168] "Request Body" body=""
	I1212 00:16:18.786282   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:18.786626   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:19.286360   48339 type.go:168] "Request Body" body=""
	I1212 00:16:19.286434   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:19.286751   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:19.786162   48339 type.go:168] "Request Body" body=""
	I1212 00:16:19.786239   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:19.786514   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:20.286225   48339 type.go:168] "Request Body" body=""
	I1212 00:16:20.286301   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:20.286620   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:20.786210   48339 type.go:168] "Request Body" body=""
	I1212 00:16:20.786283   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:20.786562   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:20.786610   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:21.286154   48339 type.go:168] "Request Body" body=""
	I1212 00:16:21.286236   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:21.286508   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:21.552920   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:16:21.609312   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:21.612881   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:21.612910   48339 retry.go:31] will retry after 24.114548677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
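	[editor's note] The ssh_runner lines show how minikube drives each apply: it runs the pinned kubectl binary inside the node with KUBECONFIG pointed at /var/lib/minikube/kubeconfig. The equivalent local invocation, sketched with os/exec rather than minikube's SSH runner (command line copied from the log):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func main() {
	    	// Mirrors the command in the log: a pinned kubectl applying an addon
	    	// manifest with --force, authenticated via the node's kubeconfig.
	    	// sudo treats the leading VAR=value argument as an environment setting.
	    	cmd := exec.Command("sudo",
	    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
	    		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
	    		"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
	    	out, err := cmd.CombinedOutput()
	    	fmt.Printf("%s", out)
	    	if err != nil {
	    		// A non-nil error here corresponds to the
	    		// "Process exited with status 1" lines in the log above.
	    		fmt.Println("apply failed:", err)
	    	}
	    }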
	I1212 00:16:21.786128   48339 type.go:168] "Request Body" body=""
	I1212 00:16:21.786221   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:21.786547   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:22.286255   48339 type.go:168] "Request Body" body=""
	I1212 00:16:22.286336   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:22.286677   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:22.786457   48339 type.go:168] "Request Body" body=""
	I1212 00:16:22.786525   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:22.786771   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:22.786820   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:23.286766   48339 type.go:168] "Request Body" body=""
	I1212 00:16:23.286841   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:23.287234   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:23.787069   48339 type.go:168] "Request Body" body=""
	I1212 00:16:23.787143   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:23.787481   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:24.286171   48339 type.go:168] "Request Body" body=""
	I1212 00:16:24.286252   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:24.286533   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:24.786238   48339 type.go:168] "Request Body" body=""
	I1212 00:16:24.786310   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:24.786625   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:25.286352   48339 type.go:168] "Request Body" body=""
	I1212 00:16:25.286433   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:25.286738   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:25.286790   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:25.786139   48339 type.go:168] "Request Body" body=""
	I1212 00:16:25.786227   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:25.786511   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:26.286200   48339 type.go:168] "Request Body" body=""
	I1212 00:16:26.286292   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:26.286614   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:26.786309   48339 type.go:168] "Request Body" body=""
	I1212 00:16:26.786416   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:26.786728   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:27.286245   48339 type.go:168] "Request Body" body=""
	I1212 00:16:27.286316   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:27.286597   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:27.786283   48339 type.go:168] "Request Body" body=""
	I1212 00:16:27.786355   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:27.786690   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:27.786745   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:28.286519   48339 type.go:168] "Request Body" body=""
	I1212 00:16:28.286594   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:28.286931   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:28.786692   48339 type.go:168] "Request Body" body=""
	I1212 00:16:28.786765   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:28.787040   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:29.286807   48339 type.go:168] "Request Body" body=""
	I1212 00:16:29.286879   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:29.287246   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:29.786890   48339 type.go:168] "Request Body" body=""
	I1212 00:16:29.786966   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:29.787276   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:29.787321   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:30.287063   48339 type.go:168] "Request Body" body=""
	I1212 00:16:30.287137   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:30.287393   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:30.787118   48339 type.go:168] "Request Body" body=""
	I1212 00:16:30.787201   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:30.787551   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:31.286150   48339 type.go:168] "Request Body" body=""
	I1212 00:16:31.286271   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:31.286606   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:31.786159   48339 type.go:168] "Request Body" body=""
	I1212 00:16:31.786233   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:31.786502   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:32.259311   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:16:32.286776   48339 type.go:168] "Request Body" body=""
	I1212 00:16:32.286852   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:32.287141   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:32.287191   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:32.315690   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:32.319144   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:32.319251   48339 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
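	[editor's note] Once the retry budget is spent, the failure is promoted to the user-facing warning above ("Enabling 'storage-provisioner' returned an error"). The root cause throughout is the same: client-side validation must fetch the OpenAPI schema from the apiserver (which is why kubectl's own message suggests --validate=false as an escape hatch), and nothing is accepting connections on port 8441. A quick reachability check that separates "apiserver down" from manifest problems, as a hedged illustration (addresses taken from the log; the probe helper is made up):

	    package main

	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    // probe reports whether anything accepts TCP connections at addr.
	    func probe(addr string) error {
	    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	    	if err != nil {
	    		return err // e.g. "connect: connection refused", as seen in the log
	    	}
	    	return conn.Close()
	    }

	    func main() {
	    	for _, addr := range []string{"localhost:8441", "192.168.49.2:8441"} {
	    		if err := probe(addr); err != nil {
	    			fmt.Printf("%s unreachable: %v\n", addr, err)
	    		} else {
	    			fmt.Printf("%s reachable\n", addr)
	    		}
	    	}
	    }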
	I1212 00:16:32.786146   48339 type.go:168] "Request Body" body=""
	I1212 00:16:32.786230   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:32.786571   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:33.286348   48339 type.go:168] "Request Body" body=""
	I1212 00:16:33.286423   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:33.286668   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:33.786187   48339 type.go:168] "Request Body" body=""
	I1212 00:16:33.786262   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:33.786597   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:34.286351   48339 type.go:168] "Request Body" body=""
	I1212 00:16:34.286425   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:34.286777   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:34.787084   48339 type.go:168] "Request Body" body=""
	I1212 00:16:34.787156   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:34.787405   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:34.787444   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:35.286102   48339 type.go:168] "Request Body" body=""
	I1212 00:16:35.286177   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:35.286533   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:35.786215   48339 type.go:168] "Request Body" body=""
	I1212 00:16:35.786285   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:35.786632   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:36.287087   48339 type.go:168] "Request Body" body=""
	I1212 00:16:36.287160   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:36.287418   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:36.786103   48339 type.go:168] "Request Body" body=""
	I1212 00:16:36.786193   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:36.786526   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:37.286129   48339 type.go:168] "Request Body" body=""
	I1212 00:16:37.286202   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:37.286544   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:37.286600   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same poll cycle repeats every ~500ms from 00:16:37.787 through 00:16:45.286: type.go:168 logs an empty "Request Body", round_trippers.go:527 issues GET https://192.168.49.2:8441/api/v1/nodes/functional-767012 (Accept: application/vnd.kubernetes.protobuf,application/json; User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format), and round_trippers.go:632 records an empty response (status="" milliseconds=0); node_ready.go:55 repeats the same "connection refused (will retry)" warning at 00:16:39.287, 00:16:41.287 and 00:16:43.787 ...]
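The cycles summarized above are minikube's node-readiness poll: every ~500ms the client issues the same GET against the apiserver and node_ready.go retries while the connection is refused. Below is a self-contained sketch of that poll-until-ready pattern, an illustration only and not minikube's actual node_ready.go code; the URL and ~500ms cadence are taken from the log, and skipping TLS verification is an assumption standing in for the client certificates a real kubeconfig would provide.

    // poll_ready.go — a minimal sketch of the retry pattern seen in the log,
    // NOT minikube's implementation. URL and cadence come from the log above;
    // InsecureSkipVerify is an assumption replacing kubeconfig client certs.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.49.2:8441/api/v1/nodes/functional-767012"
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}

    	const maxAttempts = 240 // ~2 minutes at 500ms/attempt; the real test waits much longer
    	for attempt := 1; attempt <= maxAttempts; attempt++ {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			fmt.Printf("attempt %d: apiserver answered: %s\n", attempt, resp.Status)
    			return
    		}
    		// The log surfaces the same failure as a W... warning every ~2s.
    		if attempt%4 == 0 {
    			fmt.Printf("attempt %d: %v (will retry)\n", attempt, err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("gave up: apiserver never became reachable")
    }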
	I1212 00:16:45.728277   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:16:45.786458   48339 type.go:168] "Request Body" body=""
	I1212 00:16:45.786536   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:45.786800   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:45.788347   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:45.788381   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:45.788458   48339 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 00:16:45.791789   48339 out.go:179] * Enabled addons: 
	I1212 00:16:45.795459   48339 addons.go:530] duration metric: took 1m33.015656607s for enable addons: enabled=[]
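Note on the failure above: kubectl cannot download the OpenAPI schema because the apiserver on localhost:8441 refuses connections, so validation fails before anything is applied. kubectl's own suggestion of re-running with --validate=false, i.e.

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false -f /etc/kubernetes/addons/storageclass.yaml

would only skip the schema fetch; the apply itself still needs a reachable apiserver, which is why minikube gives up here and records no enabled addons (enabled=[]).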
	[... identical poll cycles continue every ~500ms from 00:16:46.287 through 00:17:37.786, every response empty (status="" milliseconds=0, except a single 1ms response at 00:17:08.288); node_ready.go:55 keeps logging the same "connection refused (will retry)" warning roughly every 2 seconds, from 00:16:46.287 through 00:17:35.787 ...]
	I1212 00:17:38.286449   48339 type.go:168] "Request Body" body=""
	I1212 00:17:38.286532   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:38.286863   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:38.286918   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:38.786428   48339 type.go:168] "Request Body" body=""
	I1212 00:17:38.786493   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:38.786739   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:39.286231   48339 type.go:168] "Request Body" body=""
	I1212 00:17:39.286328   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:39.286669   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:39.786191   48339 type.go:168] "Request Body" body=""
	I1212 00:17:39.786261   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:39.786574   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:40.286097   48339 type.go:168] "Request Body" body=""
	I1212 00:17:40.286176   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:40.286475   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:40.786241   48339 type.go:168] "Request Body" body=""
	I1212 00:17:40.786314   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:40.786667   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:40.786722   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:41.286407   48339 type.go:168] "Request Body" body=""
	I1212 00:17:41.286483   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:41.286793   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:41.786385   48339 type.go:168] "Request Body" body=""
	I1212 00:17:41.786504   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:41.786782   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:42.286235   48339 type.go:168] "Request Body" body=""
	I1212 00:17:42.286314   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:42.286656   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:42.786517   48339 type.go:168] "Request Body" body=""
	I1212 00:17:42.786601   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:42.786955   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:42.787030   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:43.286740   48339 type.go:168] "Request Body" body=""
	I1212 00:17:43.286811   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:43.287101   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:43.786897   48339 type.go:168] "Request Body" body=""
	I1212 00:17:43.786970   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:43.787283   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:44.287080   48339 type.go:168] "Request Body" body=""
	I1212 00:17:44.287151   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:44.287449   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:44.786108   48339 type.go:168] "Request Body" body=""
	I1212 00:17:44.786188   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:44.786505   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:45.286236   48339 type.go:168] "Request Body" body=""
	I1212 00:17:45.286337   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:45.286642   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:45.286697   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:45.786185   48339 type.go:168] "Request Body" body=""
	I1212 00:17:45.786259   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:45.786591   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:46.286218   48339 type.go:168] "Request Body" body=""
	I1212 00:17:46.286326   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:46.286644   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:46.786217   48339 type.go:168] "Request Body" body=""
	I1212 00:17:46.786289   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:46.786616   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:47.286247   48339 type.go:168] "Request Body" body=""
	I1212 00:17:47.286340   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:47.286709   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:47.286768   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:47.786186   48339 type.go:168] "Request Body" body=""
	I1212 00:17:47.786263   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:47.786590   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:48.286576   48339 type.go:168] "Request Body" body=""
	I1212 00:17:48.286657   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:48.287040   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:48.786801   48339 type.go:168] "Request Body" body=""
	I1212 00:17:48.786875   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:48.787271   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:49.287049   48339 type.go:168] "Request Body" body=""
	I1212 00:17:49.287121   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:49.287376   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:49.287415   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:49.786455   48339 type.go:168] "Request Body" body=""
	I1212 00:17:49.786542   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:49.786946   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:50.286236   48339 type.go:168] "Request Body" body=""
	I1212 00:17:50.286337   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:50.286768   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:50.787077   48339 type.go:168] "Request Body" body=""
	I1212 00:17:50.787161   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:50.787441   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:51.286171   48339 type.go:168] "Request Body" body=""
	I1212 00:17:51.286244   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:51.286582   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:51.786669   48339 type.go:168] "Request Body" body=""
	I1212 00:17:51.786740   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:51.787072   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:51.787128   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:52.286721   48339 type.go:168] "Request Body" body=""
	I1212 00:17:52.286792   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:52.287074   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:52.787058   48339 type.go:168] "Request Body" body=""
	I1212 00:17:52.787135   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:52.787466   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:53.286395   48339 type.go:168] "Request Body" body=""
	I1212 00:17:53.286475   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:53.286789   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:53.786172   48339 type.go:168] "Request Body" body=""
	I1212 00:17:53.786242   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:53.786578   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:54.286214   48339 type.go:168] "Request Body" body=""
	I1212 00:17:54.286284   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:54.286634   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:54.286688   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:54.786336   48339 type.go:168] "Request Body" body=""
	I1212 00:17:54.786415   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:54.786747   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:55.287099   48339 type.go:168] "Request Body" body=""
	I1212 00:17:55.287165   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:55.287421   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:55.787184   48339 type.go:168] "Request Body" body=""
	I1212 00:17:55.787260   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:55.787579   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:56.286208   48339 type.go:168] "Request Body" body=""
	I1212 00:17:56.286283   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:56.286616   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:56.786874   48339 type.go:168] "Request Body" body=""
	I1212 00:17:56.786946   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:56.787207   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:56.787260   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:57.286785   48339 type.go:168] "Request Body" body=""
	I1212 00:17:57.286872   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:57.287249   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:57.786907   48339 type.go:168] "Request Body" body=""
	I1212 00:17:57.786979   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:57.787325   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:58.287084   48339 type.go:168] "Request Body" body=""
	I1212 00:17:58.287156   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:58.287408   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:58.787171   48339 type.go:168] "Request Body" body=""
	I1212 00:17:58.787247   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:58.787569   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:58.787624   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:59.286220   48339 type.go:168] "Request Body" body=""
	I1212 00:17:59.286295   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:59.286637   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:59.786160   48339 type.go:168] "Request Body" body=""
	I1212 00:17:59.786226   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:59.786481   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:00.286342   48339 type.go:168] "Request Body" body=""
	I1212 00:18:00.286424   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:00.286745   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:00.786411   48339 type.go:168] "Request Body" body=""
	I1212 00:18:00.786487   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:00.786799   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:01.286485   48339 type.go:168] "Request Body" body=""
	I1212 00:18:01.286554   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:01.286822   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:01.286864   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:01.786179   48339 type.go:168] "Request Body" body=""
	I1212 00:18:01.786249   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:01.786559   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:02.286272   48339 type.go:168] "Request Body" body=""
	I1212 00:18:02.286352   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:02.286681   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:02.786397   48339 type.go:168] "Request Body" body=""
	I1212 00:18:02.786473   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:02.786729   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:03.286684   48339 type.go:168] "Request Body" body=""
	I1212 00:18:03.286756   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:03.287062   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:03.287108   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:03.786757   48339 type.go:168] "Request Body" body=""
	I1212 00:18:03.786848   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:03.787220   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:04.286890   48339 type.go:168] "Request Body" body=""
	I1212 00:18:04.286971   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:04.287276   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:04.787022   48339 type.go:168] "Request Body" body=""
	I1212 00:18:04.787101   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:04.787413   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:05.286164   48339 type.go:168] "Request Body" body=""
	I1212 00:18:05.286245   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:05.286589   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:05.786893   48339 type.go:168] "Request Body" body=""
	I1212 00:18:05.786965   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:05.787232   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:05.787272   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:06.287096   48339 type.go:168] "Request Body" body=""
	I1212 00:18:06.287189   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:06.287596   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:06.786293   48339 type.go:168] "Request Body" body=""
	I1212 00:18:06.786366   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:06.786687   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:07.286878   48339 type.go:168] "Request Body" body=""
	I1212 00:18:07.286943   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:07.287205   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:07.786915   48339 type.go:168] "Request Body" body=""
	I1212 00:18:07.786985   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:07.787328   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:07.787380   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:08.286823   48339 type.go:168] "Request Body" body=""
	I1212 00:18:08.286912   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:08.287273   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:08.786885   48339 type.go:168] "Request Body" body=""
	I1212 00:18:08.786957   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:08.787238   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:09.286257   48339 type.go:168] "Request Body" body=""
	I1212 00:18:09.286349   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:09.286656   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:09.786355   48339 type.go:168] "Request Body" body=""
	I1212 00:18:09.786440   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:09.786773   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:10.286467   48339 type.go:168] "Request Body" body=""
	I1212 00:18:10.286571   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:10.286828   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:10.286869   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:10.786201   48339 type.go:168] "Request Body" body=""
	I1212 00:18:10.786280   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:10.786615   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:11.286318   48339 type.go:168] "Request Body" body=""
	I1212 00:18:11.286395   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:11.286719   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:11.786408   48339 type.go:168] "Request Body" body=""
	I1212 00:18:11.786479   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:11.786752   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:12.286228   48339 type.go:168] "Request Body" body=""
	I1212 00:18:12.286305   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:12.286693   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:12.786445   48339 type.go:168] "Request Body" body=""
	I1212 00:18:12.786529   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:12.786847   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:12.786901   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:13.286863   48339 type.go:168] "Request Body" body=""
	I1212 00:18:13.286936   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:13.287242   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:13.786941   48339 type.go:168] "Request Body" body=""
	I1212 00:18:13.787040   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:13.787410   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:14.287037   48339 type.go:168] "Request Body" body=""
	I1212 00:18:14.287114   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:14.287432   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:14.786139   48339 type.go:168] "Request Body" body=""
	I1212 00:18:14.786211   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:14.786471   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:15.286165   48339 type.go:168] "Request Body" body=""
	I1212 00:18:15.286243   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:15.286559   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:15.286619   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:15.786275   48339 type.go:168] "Request Body" body=""
	I1212 00:18:15.786355   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:15.786707   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:16.286358   48339 type.go:168] "Request Body" body=""
	I1212 00:18:16.286435   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:16.286754   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:16.786213   48339 type.go:168] "Request Body" body=""
	I1212 00:18:16.786285   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:16.786626   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:17.286235   48339 type.go:168] "Request Body" body=""
	I1212 00:18:17.286316   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:17.286711   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:17.286765   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:17.786210   48339 type.go:168] "Request Body" body=""
	I1212 00:18:17.786299   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:17.786594   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:18.286667   48339 type.go:168] "Request Body" body=""
	I1212 00:18:18.286745   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:18.287093   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:18.786871   48339 type.go:168] "Request Body" body=""
	I1212 00:18:18.786957   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:18.787347   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:19.287118   48339 type.go:168] "Request Body" body=""
	I1212 00:18:19.287189   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:19.287538   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:19.287598   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:19.786173   48339 type.go:168] "Request Body" body=""
	I1212 00:18:19.786250   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:19.786591   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:20.286290   48339 type.go:168] "Request Body" body=""
	I1212 00:18:20.286368   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:20.286732   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:20.786421   48339 type.go:168] "Request Body" body=""
	I1212 00:18:20.786496   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:20.786769   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:21.286229   48339 type.go:168] "Request Body" body=""
	I1212 00:18:21.286299   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:21.286631   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:21.786238   48339 type.go:168] "Request Body" body=""
	I1212 00:18:21.786325   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:21.786704   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:21.786756   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:22.286201   48339 type.go:168] "Request Body" body=""
	I1212 00:18:22.286267   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:22.286513   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:22.786439   48339 type.go:168] "Request Body" body=""
	I1212 00:18:22.786511   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:22.786820   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:23.286747   48339 type.go:168] "Request Body" body=""
	I1212 00:18:23.286828   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:23.287136   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:23.786886   48339 type.go:168] "Request Body" body=""
	I1212 00:18:23.786958   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:23.787219   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:23.787272   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:24.287069   48339 type.go:168] "Request Body" body=""
	I1212 00:18:24.287145   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:24.287464   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:24.787101   48339 type.go:168] "Request Body" body=""
	I1212 00:18:24.787205   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:24.787503   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:25.286157   48339 type.go:168] "Request Body" body=""
	I1212 00:18:25.286231   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:25.286484   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:25.786179   48339 type.go:168] "Request Body" body=""
	I1212 00:18:25.786250   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:25.786581   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:26.286254   48339 type.go:168] "Request Body" body=""
	I1212 00:18:26.286329   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:26.286638   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:26.286693   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:26.786132   48339 type.go:168] "Request Body" body=""
	I1212 00:18:26.786199   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:26.786452   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:27.286167   48339 type.go:168] "Request Body" body=""
	I1212 00:18:27.286240   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:27.286520   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:27.786148   48339 type.go:168] "Request Body" body=""
	I1212 00:18:27.786250   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:27.786565   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:28.286477   48339 type.go:168] "Request Body" body=""
	I1212 00:18:28.286544   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:28.286801   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:28.286842   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-767012 polled every ~0.5s from 00:18:28 through 00:19:29 with identical request/response pairs, every response empty with milliseconds=0; the node_ready.go:55 "connection refused" warning recurred 25 more times at roughly 2.5s intervals, the last at 00:19:27 ...]
	I1212 00:19:29.286191   48339 type.go:168] "Request Body" body=""
	I1212 00:19:29.286267   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:29.286615   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:29.786910   48339 type.go:168] "Request Body" body=""
	I1212 00:19:29.786981   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:29.787247   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:29.787287   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:30.287073   48339 type.go:168] "Request Body" body=""
	I1212 00:19:30.287154   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:30.287499   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:30.786184   48339 type.go:168] "Request Body" body=""
	I1212 00:19:30.786259   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:30.786602   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:31.286871   48339 type.go:168] "Request Body" body=""
	I1212 00:19:31.286942   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:31.287207   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:31.786942   48339 type.go:168] "Request Body" body=""
	I1212 00:19:31.787038   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:31.787334   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:31.787377   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:32.287019   48339 type.go:168] "Request Body" body=""
	I1212 00:19:32.287094   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:32.287431   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:32.786241   48339 type.go:168] "Request Body" body=""
	I1212 00:19:32.786308   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:32.786562   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:33.286586   48339 type.go:168] "Request Body" body=""
	I1212 00:19:33.286669   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:33.287081   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:33.786842   48339 type.go:168] "Request Body" body=""
	I1212 00:19:33.786915   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:33.787232   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:34.286965   48339 type.go:168] "Request Body" body=""
	I1212 00:19:34.287052   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:34.287321   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:34.287371   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:34.787103   48339 type.go:168] "Request Body" body=""
	I1212 00:19:34.787184   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:34.787507   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:35.286199   48339 type.go:168] "Request Body" body=""
	I1212 00:19:35.286275   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:35.286623   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:35.786303   48339 type.go:168] "Request Body" body=""
	I1212 00:19:35.786378   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:35.786633   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:36.286201   48339 type.go:168] "Request Body" body=""
	I1212 00:19:36.286276   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:36.286623   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:36.786173   48339 type.go:168] "Request Body" body=""
	I1212 00:19:36.786245   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:36.786551   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:36.786609   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:37.286157   48339 type.go:168] "Request Body" body=""
	I1212 00:19:37.286229   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:37.286482   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:37.786158   48339 type.go:168] "Request Body" body=""
	I1212 00:19:37.786237   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:37.786552   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:38.286494   48339 type.go:168] "Request Body" body=""
	I1212 00:19:38.286574   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:38.286901   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:38.786471   48339 type.go:168] "Request Body" body=""
	I1212 00:19:38.786543   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:38.786828   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:38.786871   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:39.286234   48339 type.go:168] "Request Body" body=""
	I1212 00:19:39.286306   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:39.286633   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:39.786191   48339 type.go:168] "Request Body" body=""
	I1212 00:19:39.786273   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:39.786591   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:40.286174   48339 type.go:168] "Request Body" body=""
	I1212 00:19:40.286246   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:40.286501   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:40.786212   48339 type.go:168] "Request Body" body=""
	I1212 00:19:40.786284   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:40.786618   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:41.286308   48339 type.go:168] "Request Body" body=""
	I1212 00:19:41.286385   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:41.286717   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:41.286778   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:41.786259   48339 type.go:168] "Request Body" body=""
	I1212 00:19:41.786336   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:41.786616   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:42.286266   48339 type.go:168] "Request Body" body=""
	I1212 00:19:42.286426   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:42.286836   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:42.786557   48339 type.go:168] "Request Body" body=""
	I1212 00:19:42.786636   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:42.786968   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:43.286831   48339 type.go:168] "Request Body" body=""
	I1212 00:19:43.286907   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:43.287195   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:43.287247   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:43.786980   48339 type.go:168] "Request Body" body=""
	I1212 00:19:43.787071   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:43.787383   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:44.286097   48339 type.go:168] "Request Body" body=""
	I1212 00:19:44.286182   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:44.286516   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:44.787095   48339 type.go:168] "Request Body" body=""
	I1212 00:19:44.787170   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:44.787420   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:45.286197   48339 type.go:168] "Request Body" body=""
	I1212 00:19:45.286315   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:45.286686   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:45.786212   48339 type.go:168] "Request Body" body=""
	I1212 00:19:45.786292   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:45.786616   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:45.786667   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:46.286316   48339 type.go:168] "Request Body" body=""
	I1212 00:19:46.286391   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:46.286672   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:46.786182   48339 type.go:168] "Request Body" body=""
	I1212 00:19:46.786255   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:46.786571   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:47.286220   48339 type.go:168] "Request Body" body=""
	I1212 00:19:47.286293   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:47.286639   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:47.787075   48339 type.go:168] "Request Body" body=""
	I1212 00:19:47.787141   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:47.787388   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:47.787425   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:48.286333   48339 type.go:168] "Request Body" body=""
	I1212 00:19:48.286406   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:48.286742   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:48.786260   48339 type.go:168] "Request Body" body=""
	I1212 00:19:48.786335   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:48.786670   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:49.286373   48339 type.go:168] "Request Body" body=""
	I1212 00:19:49.286448   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:49.286721   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:49.786393   48339 type.go:168] "Request Body" body=""
	I1212 00:19:49.786466   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:49.786793   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:50.286556   48339 type.go:168] "Request Body" body=""
	I1212 00:19:50.286645   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:50.286977   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:50.287046   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:50.786244   48339 type.go:168] "Request Body" body=""
	I1212 00:19:50.786323   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:50.786639   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:51.286207   48339 type.go:168] "Request Body" body=""
	I1212 00:19:51.286281   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:51.286646   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:51.786236   48339 type.go:168] "Request Body" body=""
	I1212 00:19:51.786326   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:51.786698   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:52.286383   48339 type.go:168] "Request Body" body=""
	I1212 00:19:52.286453   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:52.286705   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:52.786430   48339 type.go:168] "Request Body" body=""
	I1212 00:19:52.786502   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:52.786808   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:52.786864   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.286726   48339 type.go:168] "Request Body" body=""
	I1212 00:19:53.286799   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:53.287127   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:53.786892   48339 type.go:168] "Request Body" body=""
	I1212 00:19:53.786963   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:53.787281   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:54.287089   48339 type.go:168] "Request Body" body=""
	I1212 00:19:54.287161   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:54.287510   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:54.787071   48339 type.go:168] "Request Body" body=""
	I1212 00:19:54.787148   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:54.787473   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:54.787523   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:55.287042   48339 type.go:168] "Request Body" body=""
	I1212 00:19:55.287120   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:55.287397   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:55.786097   48339 type.go:168] "Request Body" body=""
	I1212 00:19:55.786167   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:55.786471   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:56.286180   48339 type.go:168] "Request Body" body=""
	I1212 00:19:56.286255   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:56.286560   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:56.786731   48339 type.go:168] "Request Body" body=""
	I1212 00:19:56.786834   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:56.787097   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:57.286916   48339 type.go:168] "Request Body" body=""
	I1212 00:19:57.287011   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:57.287338   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:57.287392   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:57.787113   48339 type.go:168] "Request Body" body=""
	I1212 00:19:57.787195   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:57.787542   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:58.286383   48339 type.go:168] "Request Body" body=""
	I1212 00:19:58.286455   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:58.286708   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:58.786168   48339 type.go:168] "Request Body" body=""
	I1212 00:19:58.786245   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:58.786576   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:59.286179   48339 type.go:168] "Request Body" body=""
	I1212 00:19:59.286256   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:59.286592   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:59.786275   48339 type.go:168] "Request Body" body=""
	I1212 00:19:59.786344   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:59.786595   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:59.786633   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:00.286342   48339 type.go:168] "Request Body" body=""
	I1212 00:20:00.286436   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:00.286738   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:00.786599   48339 type.go:168] "Request Body" body=""
	I1212 00:20:00.786680   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:00.787175   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:01.286983   48339 type.go:168] "Request Body" body=""
	I1212 00:20:01.287070   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:01.287375   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:01.787109   48339 type.go:168] "Request Body" body=""
	I1212 00:20:01.787182   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:01.787524   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:01.787578   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:02.286136   48339 type.go:168] "Request Body" body=""
	I1212 00:20:02.286214   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:02.286541   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:02.786447   48339 type.go:168] "Request Body" body=""
	I1212 00:20:02.786522   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:02.786791   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:03.286727   48339 type.go:168] "Request Body" body=""
	I1212 00:20:03.286808   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:03.287147   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:03.786954   48339 type.go:168] "Request Body" body=""
	I1212 00:20:03.787051   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:03.787411   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:04.287105   48339 type.go:168] "Request Body" body=""
	I1212 00:20:04.287184   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:04.287440   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:04.287480   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:04.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:20:04.786275   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:04.786621   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:05.286300   48339 type.go:168] "Request Body" body=""
	I1212 00:20:05.286378   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:05.286699   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:05.786189   48339 type.go:168] "Request Body" body=""
	I1212 00:20:05.786286   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:05.786574   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:06.286216   48339 type.go:168] "Request Body" body=""
	I1212 00:20:06.286291   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:06.286653   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:06.786351   48339 type.go:168] "Request Body" body=""
	I1212 00:20:06.786425   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:06.786777   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:06.786833   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:07.286483   48339 type.go:168] "Request Body" body=""
	I1212 00:20:07.286562   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:07.286815   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:07.786485   48339 type.go:168] "Request Body" body=""
	I1212 00:20:07.786559   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:07.786920   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:08.286761   48339 type.go:168] "Request Body" body=""
	I1212 00:20:08.286836   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:08.287188   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:08.786941   48339 type.go:168] "Request Body" body=""
	I1212 00:20:08.787029   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:08.787324   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:08.787386   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:09.287127   48339 type.go:168] "Request Body" body=""
	I1212 00:20:09.287201   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:09.287579   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:09.786165   48339 type.go:168] "Request Body" body=""
	I1212 00:20:09.786264   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:09.786669   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:10.286348   48339 type.go:168] "Request Body" body=""
	I1212 00:20:10.286420   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:10.286711   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:10.786398   48339 type.go:168] "Request Body" body=""
	I1212 00:20:10.786476   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:10.786785   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:11.286171   48339 type.go:168] "Request Body" body=""
	I1212 00:20:11.286251   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:11.286562   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:11.286616   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:11.786160   48339 type.go:168] "Request Body" body=""
	I1212 00:20:11.786237   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:11.786560   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:12.286232   48339 type.go:168] "Request Body" body=""
	I1212 00:20:12.286313   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:12.286631   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:12.786525   48339 type.go:168] "Request Body" body=""
	I1212 00:20:12.786596   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:12.786927   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:13.286693   48339 type.go:168] "Request Body" body=""
	I1212 00:20:13.286759   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:13.287036   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:13.287076   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:13.786823   48339 type.go:168] "Request Body" body=""
	I1212 00:20:13.786903   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:13.787250   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:14.287106   48339 type.go:168] "Request Body" body=""
	I1212 00:20:14.287193   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:14.287515   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:14.786201   48339 type.go:168] "Request Body" body=""
	I1212 00:20:14.786277   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:14.786598   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:15.286226   48339 type.go:168] "Request Body" body=""
	I1212 00:20:15.286303   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:15.286675   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:15.786377   48339 type.go:168] "Request Body" body=""
	I1212 00:20:15.786454   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:15.786771   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:15.786825   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:16.286146   48339 type.go:168] "Request Body" body=""
	I1212 00:20:16.286230   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:16.286475   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:16.786183   48339 type.go:168] "Request Body" body=""
	I1212 00:20:16.786256   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:16.786581   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:17.286264   48339 type.go:168] "Request Body" body=""
	I1212 00:20:17.286366   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:17.286686   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:17.786376   48339 type.go:168] "Request Body" body=""
	I1212 00:20:17.786459   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:17.786714   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:18.286808   48339 type.go:168] "Request Body" body=""
	I1212 00:20:18.286881   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:18.287211   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:18.287257   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the GET poll above repeats unchanged at ~500ms intervals from 00:20:18.787 through 00:21:13.287, roughly 110 attempts in all; every response is empty (status="" milliseconds=0) and node_ready.go:55 logs the same "dial tcp 192.168.49.2:8441: connect: connection refused" retry warning every few attempts ...]
	I1212 00:21:13.786749   48339 type.go:168] "Request Body" body=""
	I1212 00:21:13.786806   48339 node_ready.go:38] duration metric: took 6m0.00081197s for node "functional-767012" to be "Ready" ...
	I1212 00:21:13.789905   48339 out.go:203] 
	W1212 00:21:13.792750   48339 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 00:21:13.792769   48339 out.go:285] * 
	W1212 00:21:13.794879   48339 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:21:13.797575   48339 out.go:203] 
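
	The failure above is the node-readiness wait expiring: the client polls the node object every ~500ms and gives up when the 6-minute context deadline passes. Below is a minimal client-go sketch of that pattern, illustrative only and not minikube's actual node_ready.go; the kubeconfig path and node name are taken from this report.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the kubeconfig this test run uses.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Same overall budget as the failing wait: 6 minutes.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		for {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-767012", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			} else {
				// In this report every attempt fails here with "connection refused".
				fmt.Println("will retry:", err)
			}
			select {
			case <-ctx.Done():
				// This is the "WaitNodeCondition: context deadline exceeded" path.
				fmt.Println("wait for node: context deadline exceeded")
				return
			case <-time.After(500 * time.Millisecond):
			}
		}
	}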
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.783378403Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.783446235Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.783574556Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.783659160Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.783717228Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.783783379Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.783840183Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.783901205Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.783968570Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.784048940Z" level=info msg="Connect containerd service"
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.784416877Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.785157717Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.794662816Z" level=info msg="Start subscribing containerd event"
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.795424792Z" level=info msg="Start recovering state"
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.795822604Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.795946897Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.833749922Z" level=info msg="Start event monitor"
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.833976992Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.834049535Z" level=info msg="Start streaming server"
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.834116111Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.834171899Z" level=info msg="runtime interface starting up..."
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.834228252Z" level=info msg="starting plugins..."
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.834293377Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 00:15:10 functional-767012 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.837148687Z" level=info msg="containerd successfully booted in 0.080353s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:21:15.736201    8420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:21:15.737269    8420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:21:15.738328    8420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:21:15.739844    8420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:21:15.740217    8420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 00:21:15 up  1:03,  0 user,  load average: 0.16, 0.26, 0.51
	Linux functional-767012 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:21:12 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:21:13 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 808.
	Dec 12 00:21:13 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:13 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:13 functional-767012 kubelet[8304]: E1212 00:21:13.330507    8304 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:21:13 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:21:13 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:21:14 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 809.
	Dec 12 00:21:14 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:14 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:14 functional-767012 kubelet[8309]: E1212 00:21:14.102134    8309 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:21:14 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:21:14 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:21:14 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 12 00:21:14 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:14 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:14 functional-767012 kubelet[8328]: E1212 00:21:14.829454    8328 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:21:14 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:21:14 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:21:15 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 12 00:21:15 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:15 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:15 functional-767012 kubelet[8380]: E1212 00:21:15.601873    8380 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:21:15 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:21:15 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
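Note: the kubelet loop at the end of the log above is the root cause of this failure chain: the v1.35.0-beta.0 kubelet refuses to validate its configuration on a cgroup v1 host, so the control plane never starts and the node never reports Ready. A minimal diagnostic sketch, assuming shell access to the host (not part of the test run):

	# "cgroup2fs" means the kernel exposes cgroup v2; "tmpfs" indicates the legacy v1 hierarchy.
	stat -fc %T /sys/fs/cgroup/
	# The kicbase container uses the host cgroup namespace ("CgroupnsMode": "host" in the
	# docker inspect output later in this report), so a cgroup v1 host propagates into the
	# guest; Ubuntu 20.04, the host OS here, still defaults to the hybrid v1 hierarchy.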
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012: exit status 2 (435.889612ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-767012" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.59s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-767012 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-767012 get po -A: exit status 1 (58.523876ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-767012 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-767012 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-767012 get po -A"
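Note: this is the same root failure seen from outside the cluster: nothing listens on the apiserver endpoint. A hedged reproduction against the address reported above, assuming nc is available on the host:

	# 192.168.49.2:8441 is the node IP and apiserver port from the cluster config.
	nc -zv 192.168.49.2 8441                         # expected here: connection refused
	kubectl --context functional-767012 get po -A    # fails exactly as the test does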
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-767012
helpers_test.go:244: (dbg) docker inspect functional-767012:

-- stdout --
	[
	    {
	        "Id": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	        "Created": "2025-12-12T00:06:52.261765556Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42951,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:06:52.317917194Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hostname",
	        "HostsPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hosts",
	        "LogPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e-json.log",
	        "Name": "/functional-767012",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-767012:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-767012",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	                "LowerDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-767012",
	                "Source": "/var/lib/docker/volumes/functional-767012/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-767012",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-767012",
	                "name.minikube.sigs.k8s.io": "functional-767012",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e781257da3adf1d3284ab2a6de0168c3db7957f25a7e53d0015250294193762d",
	            "SandboxKey": "/var/run/docker/netns/e781257da3ad",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-767012": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:4d:78:ba:7d:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "83467cc4cb13818b98ec0d7cb5fc0064ea6eb2c8db4256a8a81330921aa2d9a4",
	                    "EndpointID": "b787b732d8d748776ceeb6e65fab51cc1e79758446bc85ac20043b35593fab12",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-767012",
	                        "6585a82fe5e6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
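Note: the inspect output above publishes container port 8441/tcp (the apiserver) on 127.0.0.1:32791. A short sketch for probing that mapping directly; the host port number is specific to this run:

	docker port functional-767012 8441/tcp       # prints 127.0.0.1:32791 for this run
	curl -sk https://127.0.0.1:32791/healthz     # connection refused: no apiserver behind the mapping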
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012: exit status 2 (308.828292ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
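Note: Host=Running together with APIServer=Stopped means the docker container itself is healthy and the failure is inside the guest. A hedged way to confirm the kubelet crash loop without the harness, relying on kicbase running systemd as PID 1:

	docker exec functional-767012 systemctl status kubelet --no-pager | head -n 6
	docker exec functional-767012 journalctl -u kubelet --no-pager | tail -n 5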
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-095481 ssh sudo cat /etc/ssl/certs/42902.pem                                                                                                         │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ ssh            │ functional-095481 ssh sudo cat /usr/share/ca-certificates/42902.pem                                                                                             │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls                                                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ ssh            │ functional-095481 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image load --daemon kicbase/echo-server:functional-095481 --alsologtostderr                                                                   │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls                                                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image save kicbase/echo-server:functional-095481 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ update-context │ functional-095481 update-context --alsologtostderr -v=2                                                                                                         │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image rm kicbase/echo-server:functional-095481 --alsologtostderr                                                                              │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ update-context │ functional-095481 update-context --alsologtostderr -v=2                                                                                                         │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ update-context │ functional-095481 update-context --alsologtostderr -v=2                                                                                                         │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls                                                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls                                                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image save --daemon kicbase/echo-server:functional-095481 --alsologtostderr                                                                   │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls --format yaml --alsologtostderr                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls --format short --alsologtostderr                                                                                                     │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls --format table --alsologtostderr                                                                                                     │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls --format json --alsologtostderr                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ ssh            │ functional-095481 ssh pgrep buildkitd                                                                                                                           │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │                     │
	│ image          │ functional-095481 image build -t localhost/my-image:functional-095481 testdata/build --alsologtostderr                                                          │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image          │ functional-095481 image ls                                                                                                                                      │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ delete         │ -p functional-095481                                                                                                                                            │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ start          │ -p functional-767012 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │                     │
	│ start          │ -p functional-767012 --alsologtostderr -v=8                                                                                                                     │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:15 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:15:08
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:15:08.188216   48339 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:15:08.188435   48339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:15:08.188463   48339 out.go:374] Setting ErrFile to fd 2...
	I1212 00:15:08.188485   48339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:15:08.188893   48339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:15:08.189436   48339 out.go:368] Setting JSON to false
	I1212 00:15:08.190327   48339 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3455,"bootTime":1765495054,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 00:15:08.190468   48339 start.go:143] virtualization:  
	I1212 00:15:08.194075   48339 out.go:179] * [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:15:08.197745   48339 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:15:08.197889   48339 notify.go:221] Checking for updates...
	I1212 00:15:08.203623   48339 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:15:08.206559   48339 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:08.209313   48339 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 00:15:08.212202   48339 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:15:08.215231   48339 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:15:08.218454   48339 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:15:08.218601   48339 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:15:08.244528   48339 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:15:08.244655   48339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:15:08.299617   48339 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:15:08.290252755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:15:08.299730   48339 docker.go:319] overlay module found
	I1212 00:15:08.302863   48339 out.go:179] * Using the docker driver based on existing profile
	I1212 00:15:08.305730   48339 start.go:309] selected driver: docker
	I1212 00:15:08.305754   48339 start.go:927] validating driver "docker" against &{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:15:08.305854   48339 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:15:08.305953   48339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:15:08.359436   48339 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:15:08.349975764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:15:08.359860   48339 cni.go:84] Creating CNI manager for ""
	I1212 00:15:08.359920   48339 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:15:08.359966   48339 start.go:353] cluster config:
	{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:15:08.363136   48339 out.go:179] * Starting "functional-767012" primary control-plane node in "functional-767012" cluster
	I1212 00:15:08.365917   48339 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 00:15:08.368829   48339 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:15:08.371809   48339 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:15:08.371858   48339 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 00:15:08.371872   48339 cache.go:65] Caching tarball of preloaded images
	I1212 00:15:08.371970   48339 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 00:15:08.371992   48339 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 00:15:08.372099   48339 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/config.json ...
	I1212 00:15:08.372328   48339 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:15:08.391509   48339 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:15:08.391533   48339 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:15:08.391552   48339 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:15:08.391583   48339 start.go:360] acquireMachinesLock for functional-767012: {Name:mk41cf89e93a3830367886ebbef2bb8f6e99e3f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:15:08.391643   48339 start.go:364] duration metric: took 36.464µs to acquireMachinesLock for "functional-767012"
	I1212 00:15:08.391666   48339 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:15:08.391675   48339 fix.go:54] fixHost starting: 
	I1212 00:15:08.391939   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:08.408717   48339 fix.go:112] recreateIfNeeded on functional-767012: state=Running err=<nil>
	W1212 00:15:08.408748   48339 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:15:08.411849   48339 out.go:252] * Updating the running docker "functional-767012" container ...
	I1212 00:15:08.411881   48339 machine.go:94] provisionDockerMachine start ...
	I1212 00:15:08.411961   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:08.429482   48339 main.go:143] libmachine: Using SSH client type: native
	I1212 00:15:08.429817   48339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:15:08.429834   48339 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:15:08.578648   48339 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
	I1212 00:15:08.578671   48339 ubuntu.go:182] provisioning hostname "functional-767012"
	I1212 00:15:08.578741   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:08.596871   48339 main.go:143] libmachine: Using SSH client type: native
	I1212 00:15:08.597187   48339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:15:08.597227   48339 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-767012 && echo "functional-767012" | sudo tee /etc/hostname
	I1212 00:15:08.759668   48339 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
	I1212 00:15:08.759746   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:08.776780   48339 main.go:143] libmachine: Using SSH client type: native
	I1212 00:15:08.777096   48339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:15:08.777119   48339 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-767012' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-767012/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-767012' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:15:08.931523   48339 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:15:08.931550   48339 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 00:15:08.931582   48339 ubuntu.go:190] setting up certificates
	I1212 00:15:08.931592   48339 provision.go:84] configureAuth start
	I1212 00:15:08.931653   48339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:15:08.952406   48339 provision.go:143] copyHostCerts
	I1212 00:15:08.952454   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 00:15:08.952497   48339 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 00:15:08.952507   48339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 00:15:08.952585   48339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 00:15:08.952685   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 00:15:08.952707   48339 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 00:15:08.952712   48339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 00:15:08.952745   48339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 00:15:08.952800   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 00:15:08.952821   48339 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 00:15:08.952828   48339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 00:15:08.952852   48339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 00:15:08.952913   48339 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.functional-767012 san=[127.0.0.1 192.168.49.2 functional-767012 localhost minikube]
	I1212 00:15:09.089842   48339 provision.go:177] copyRemoteCerts
	I1212 00:15:09.089908   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:15:09.089956   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.108065   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.210645   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:15:09.210700   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:15:09.228116   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:15:09.228176   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:15:09.245824   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:15:09.245889   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:15:09.263086   48339 provision.go:87] duration metric: took 331.470752ms to configureAuth
	I1212 00:15:09.263116   48339 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:15:09.263293   48339 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:15:09.263306   48339 machine.go:97] duration metric: took 851.418761ms to provisionDockerMachine
	I1212 00:15:09.263315   48339 start.go:293] postStartSetup for "functional-767012" (driver="docker")
	I1212 00:15:09.263326   48339 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:15:09.263390   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:15:09.263439   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.281753   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.386868   48339 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:15:09.390421   48339 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 00:15:09.390442   48339 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 00:15:09.390447   48339 command_runner.go:130] > VERSION_ID="12"
	I1212 00:15:09.390451   48339 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 00:15:09.390456   48339 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 00:15:09.390460   48339 command_runner.go:130] > ID=debian
	I1212 00:15:09.390464   48339 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 00:15:09.390469   48339 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 00:15:09.390475   48339 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 00:15:09.390546   48339 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:15:09.390568   48339 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:15:09.390580   48339 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 00:15:09.390640   48339 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 00:15:09.390732   48339 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 00:15:09.390742   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> /etc/ssl/certs/42902.pem
	I1212 00:15:09.390816   48339 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts -> hosts in /etc/test/nested/copy/4290
	I1212 00:15:09.390824   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts -> /etc/test/nested/copy/4290/hosts
	I1212 00:15:09.390867   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4290
	I1212 00:15:09.398526   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:15:09.416059   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts --> /etc/test/nested/copy/4290/hosts (40 bytes)
	I1212 00:15:09.433237   48339 start.go:296] duration metric: took 169.908089ms for postStartSetup
	I1212 00:15:09.433321   48339 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:15:09.433384   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.450800   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.556105   48339 command_runner.go:130] > 14%
	I1212 00:15:09.557034   48339 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:15:09.562380   48339 command_runner.go:130] > 169G
	I1212 00:15:09.562946   48339 fix.go:56] duration metric: took 1.171267005s for fixHost
	I1212 00:15:09.562967   48339 start.go:83] releasing machines lock for "functional-767012", held for 1.171312429s
	I1212 00:15:09.563050   48339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:15:09.582602   48339 ssh_runner.go:195] Run: cat /version.json
	I1212 00:15:09.582654   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.582889   48339 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:15:09.582947   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.601106   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.627042   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.706722   48339 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1212 00:15:09.706847   48339 ssh_runner.go:195] Run: systemctl --version
	I1212 00:15:09.800321   48339 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 00:15:09.800390   48339 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 00:15:09.800423   48339 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 00:15:09.800514   48339 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:15:09.804624   48339 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 00:15:09.804945   48339 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:15:09.805036   48339 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:15:09.812955   48339 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:15:09.813030   48339 start.go:496] detecting cgroup driver to use...
	I1212 00:15:09.813095   48339 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 00:15:09.813242   48339 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:15:09.829352   48339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:15:09.842558   48339 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:15:09.842620   48339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:15:09.858553   48339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:15:09.872251   48339 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:15:10.008398   48339 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:15:10.140361   48339 docker.go:234] disabling docker service ...
	I1212 00:15:10.140425   48339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:15:10.156860   48339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:15:10.170461   48339 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:15:10.304156   48339 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:15:10.452566   48339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:15:10.465745   48339 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:15:10.479553   48339 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 00:15:10.480868   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 00:15:10.489677   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:15:10.498827   48339 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:15:10.498939   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:15:10.508103   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:15:10.516726   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:15:10.525281   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:15:10.533906   48339 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:15:10.541697   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 00:15:10.550595   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 00:15:10.559645   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 00:15:10.568588   48339 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:15:10.575412   48339 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 00:15:10.576366   48339 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:15:10.583788   48339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:15:10.698857   48339 ssh_runner.go:195] Run: sudo systemctl restart containerd
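For reference, the sed edits above pin the pause image, force SystemdCgroup = false (the detected host cgroup driver is cgroupfs), and point conf_dir at /etc/cni/net.d. A minimal sketch for confirming the resulting settings on the node, assuming the functional-767012 profile from this run is still up:

    # Inspect the containerd settings rewritten by the sed commands above.
    minikube ssh -p functional-767012 -- \
      "grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml"
    # Expected per this log: SystemdCgroup = false,
    # sandbox_image = "registry.k8s.io/pause:3.10.1", conf_dir = "/etc/cni/net.d"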
	I1212 00:15:10.837222   48339 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 00:15:10.837316   48339 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 00:15:10.841505   48339 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1212 00:15:10.841543   48339 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 00:15:10.841551   48339 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1212 00:15:10.841558   48339 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:15:10.841564   48339 command_runner.go:130] > Access: 2025-12-12 00:15:10.793315522 +0000
	I1212 00:15:10.841569   48339 command_runner.go:130] > Modify: 2025-12-12 00:15:10.793315522 +0000
	I1212 00:15:10.841575   48339 command_runner.go:130] > Change: 2025-12-12 00:15:10.793315522 +0000
	I1212 00:15:10.841583   48339 command_runner.go:130] >  Birth: -
	I1212 00:15:10.841612   48339 start.go:564] Will wait 60s for crictl version
	I1212 00:15:10.841667   48339 ssh_runner.go:195] Run: which crictl
	I1212 00:15:10.845418   48339 command_runner.go:130] > /usr/local/bin/crictl
	I1212 00:15:10.845528   48339 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:15:10.867684   48339 command_runner.go:130] > Version:  0.1.0
	I1212 00:15:10.867710   48339 command_runner.go:130] > RuntimeName:  containerd
	I1212 00:15:10.867718   48339 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1212 00:15:10.867725   48339 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 00:15:10.869691   48339 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 00:15:10.869761   48339 ssh_runner.go:195] Run: containerd --version
	I1212 00:15:10.889630   48339 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1212 00:15:10.891644   48339 ssh_runner.go:195] Run: containerd --version
	I1212 00:15:10.909520   48339 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1212 00:15:10.917318   48339 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 00:15:10.920211   48339 cli_runner.go:164] Run: docker network inspect functional-767012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:15:10.936971   48339 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:15:10.940949   48339 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1212 00:15:10.941183   48339 kubeadm.go:884] updating cluster {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:15:10.941314   48339 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:15:10.941401   48339 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:15:10.964902   48339 command_runner.go:130] > {
	I1212 00:15:10.964923   48339 command_runner.go:130] >   "images":  [
	I1212 00:15:10.964934   48339 command_runner.go:130] >     {
	I1212 00:15:10.964944   48339 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:15:10.964949   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.964954   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:15:10.964957   48339 command_runner.go:130] >       ],
	I1212 00:15:10.964962   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.964974   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 00:15:10.964977   48339 command_runner.go:130] >       ],
	I1212 00:15:10.964982   48339 command_runner.go:130] >       "size":  "40636774",
	I1212 00:15:10.964989   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.964994   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965005   48339 command_runner.go:130] >     },
	I1212 00:15:10.965009   48339 command_runner.go:130] >     {
	I1212 00:15:10.965017   48339 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:15:10.965023   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965029   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:15:10.965032   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965036   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965047   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:15:10.965050   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965054   48339 command_runner.go:130] >       "size":  "8034419",
	I1212 00:15:10.965058   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965062   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965068   48339 command_runner.go:130] >     },
	I1212 00:15:10.965071   48339 command_runner.go:130] >     {
	I1212 00:15:10.965079   48339 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:15:10.965085   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965092   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:15:10.965095   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965101   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965112   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 00:15:10.965115   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965121   48339 command_runner.go:130] >       "size":  "21168808",
	I1212 00:15:10.965129   48339 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:15:10.965134   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965137   48339 command_runner.go:130] >     },
	I1212 00:15:10.965143   48339 command_runner.go:130] >     {
	I1212 00:15:10.965152   48339 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:15:10.965164   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965169   48339 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:15:10.965172   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965176   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965190   48339 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 00:15:10.965193   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965199   48339 command_runner.go:130] >       "size":  "21136588",
	I1212 00:15:10.965203   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965218   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965224   48339 command_runner.go:130] >       },
	I1212 00:15:10.965228   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965231   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965235   48339 command_runner.go:130] >     },
	I1212 00:15:10.965238   48339 command_runner.go:130] >     {
	I1212 00:15:10.965245   48339 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:15:10.965251   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965256   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:15:10.965262   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965266   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965274   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 00:15:10.965278   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965285   48339 command_runner.go:130] >       "size":  "24678359",
	I1212 00:15:10.965288   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965296   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965302   48339 command_runner.go:130] >       },
	I1212 00:15:10.965306   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965311   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965314   48339 command_runner.go:130] >     },
	I1212 00:15:10.965323   48339 command_runner.go:130] >     {
	I1212 00:15:10.965332   48339 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:15:10.965345   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965350   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:15:10.965354   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965358   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965373   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 00:15:10.965377   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965381   48339 command_runner.go:130] >       "size":  "20661043",
	I1212 00:15:10.965385   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965392   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965395   48339 command_runner.go:130] >       },
	I1212 00:15:10.965399   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965403   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965406   48339 command_runner.go:130] >     },
	I1212 00:15:10.965412   48339 command_runner.go:130] >     {
	I1212 00:15:10.965420   48339 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:15:10.965426   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965431   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:15:10.965434   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965438   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965446   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:15:10.965453   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965457   48339 command_runner.go:130] >       "size":  "22429671",
	I1212 00:15:10.965461   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965465   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965469   48339 command_runner.go:130] >     },
	I1212 00:15:10.965475   48339 command_runner.go:130] >     {
	I1212 00:15:10.965482   48339 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:15:10.965486   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965492   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:15:10.965497   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965502   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965515   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 00:15:10.965522   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965526   48339 command_runner.go:130] >       "size":  "15391364",
	I1212 00:15:10.965530   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965534   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965539   48339 command_runner.go:130] >       },
	I1212 00:15:10.965543   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965553   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965556   48339 command_runner.go:130] >     },
	I1212 00:15:10.965559   48339 command_runner.go:130] >     {
	I1212 00:15:10.965566   48339 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:15:10.965570   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965574   48339 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:15:10.965578   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965582   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965591   48339 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 00:15:10.965602   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965606   48339 command_runner.go:130] >       "size":  "267939",
	I1212 00:15:10.965610   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965614   48339 command_runner.go:130] >         "value":  "65535"
	I1212 00:15:10.965617   48339 command_runner.go:130] >       },
	I1212 00:15:10.965628   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965632   48339 command_runner.go:130] >       "pinned":  true
	I1212 00:15:10.965635   48339 command_runner.go:130] >     }
	I1212 00:15:10.965638   48339 command_runner.go:130] >   ]
	I1212 00:15:10.965640   48339 command_runner.go:130] > }
	I1212 00:15:10.968555   48339 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:15:10.968581   48339 containerd.go:534] Images already preloaded, skipping extraction
	I1212 00:15:10.968640   48339 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:15:10.995305   48339 command_runner.go:130] > {
	I1212 00:15:10.995329   48339 command_runner.go:130] >   "images":  [
	I1212 00:15:10.995334   48339 command_runner.go:130] >     {
	I1212 00:15:10.995344   48339 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:15:10.995349   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995355   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:15:10.995359   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995375   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995392   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 00:15:10.995395   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995400   48339 command_runner.go:130] >       "size":  "40636774",
	I1212 00:15:10.995404   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995408   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995414   48339 command_runner.go:130] >     },
	I1212 00:15:10.995418   48339 command_runner.go:130] >     {
	I1212 00:15:10.995429   48339 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:15:10.995438   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995444   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:15:10.995448   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995452   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995466   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:15:10.995470   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995475   48339 command_runner.go:130] >       "size":  "8034419",
	I1212 00:15:10.995483   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995487   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995490   48339 command_runner.go:130] >     },
	I1212 00:15:10.995493   48339 command_runner.go:130] >     {
	I1212 00:15:10.995500   48339 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:15:10.995506   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995512   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:15:10.995515   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995524   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995536   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 00:15:10.995540   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995544   48339 command_runner.go:130] >       "size":  "21168808",
	I1212 00:15:10.995554   48339 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:15:10.995558   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995561   48339 command_runner.go:130] >     },
	I1212 00:15:10.995564   48339 command_runner.go:130] >     {
	I1212 00:15:10.995572   48339 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:15:10.995583   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995588   48339 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:15:10.995592   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995596   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995603   48339 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 00:15:10.995611   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995615   48339 command_runner.go:130] >       "size":  "21136588",
	I1212 00:15:10.995619   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995623   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995631   48339 command_runner.go:130] >       },
	I1212 00:15:10.995635   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995639   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995642   48339 command_runner.go:130] >     },
	I1212 00:15:10.995646   48339 command_runner.go:130] >     {
	I1212 00:15:10.995659   48339 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:15:10.995663   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995678   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:15:10.995687   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995692   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995701   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 00:15:10.995709   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995713   48339 command_runner.go:130] >       "size":  "24678359",
	I1212 00:15:10.995716   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995727   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995734   48339 command_runner.go:130] >       },
	I1212 00:15:10.995738   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995743   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995746   48339 command_runner.go:130] >     },
	I1212 00:15:10.995749   48339 command_runner.go:130] >     {
	I1212 00:15:10.995756   48339 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:15:10.995762   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995768   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:15:10.995771   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995782   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995795   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 00:15:10.995798   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995802   48339 command_runner.go:130] >       "size":  "20661043",
	I1212 00:15:10.995811   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995815   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995820   48339 command_runner.go:130] >       },
	I1212 00:15:10.995830   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995834   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995838   48339 command_runner.go:130] >     },
	I1212 00:15:10.995841   48339 command_runner.go:130] >     {
	I1212 00:15:10.995847   48339 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:15:10.995854   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995859   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:15:10.995863   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995867   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995877   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:15:10.995884   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995888   48339 command_runner.go:130] >       "size":  "22429671",
	I1212 00:15:10.995893   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995902   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995906   48339 command_runner.go:130] >     },
	I1212 00:15:10.995909   48339 command_runner.go:130] >     {
	I1212 00:15:10.995916   48339 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:15:10.995924   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995929   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:15:10.995933   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995937   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995948   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 00:15:10.995952   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995956   48339 command_runner.go:130] >       "size":  "15391364",
	I1212 00:15:10.995963   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995967   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995983   48339 command_runner.go:130] >       },
	I1212 00:15:10.995993   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995997   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.996001   48339 command_runner.go:130] >     },
	I1212 00:15:10.996004   48339 command_runner.go:130] >     {
	I1212 00:15:10.996011   48339 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:15:10.996020   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.996025   48339 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:15:10.996029   48339 command_runner.go:130] >       ],
	I1212 00:15:10.996033   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.996046   48339 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 00:15:10.996053   48339 command_runner.go:130] >       ],
	I1212 00:15:10.996057   48339 command_runner.go:130] >       "size":  "267939",
	I1212 00:15:10.996061   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.996065   48339 command_runner.go:130] >         "value":  "65535"
	I1212 00:15:10.996074   48339 command_runner.go:130] >       },
	I1212 00:15:10.996078   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.996086   48339 command_runner.go:130] >       "pinned":  true
	I1212 00:15:10.996089   48339 command_runner.go:130] >     }
	I1212 00:15:10.996095   48339 command_runner.go:130] >   ]
	I1212 00:15:10.996103   48339 command_runner.go:130] > }
	I1212 00:15:10.997943   48339 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:15:10.997972   48339 cache_images.go:86] Images are preloaded, skipping loading
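The preload check above parses "sudo crictl images --output json" and compares the tags against the expected image set for v1.35.0-beta.0; since every image is present, both tarball extraction and image loading are skipped. A minimal sketch of reproducing the comparison by hand, assuming jq is installed on the host:

    # List the image tags the runtime already has (same data as the JSON above).
    minikube ssh -p functional-767012 -- sudo crictl images --output json \
      | jq -r '.images[].repoTags[]'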
	I1212 00:15:10.997981   48339 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 00:15:10.998119   48339 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-767012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
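A minimal sketch for inspecting the kubelet unit and drop-in that the flags above are written into (the paths match the scp targets later in this log), assuming the profile is still up:

    # Show the effective kubelet unit, including the 10-kubeadm.conf drop-in.
    minikube ssh -p functional-767012 -- systemctl cat kubelet
    minikube ssh -p functional-767012 -- \
      cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf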
	I1212 00:15:10.998212   48339 ssh_runner.go:195] Run: sudo crictl info
	I1212 00:15:11.021367   48339 command_runner.go:130] > {
	I1212 00:15:11.021387   48339 command_runner.go:130] >   "cniconfig": {
	I1212 00:15:11.021393   48339 command_runner.go:130] >     "Networks": [
	I1212 00:15:11.021397   48339 command_runner.go:130] >       {
	I1212 00:15:11.021403   48339 command_runner.go:130] >         "Config": {
	I1212 00:15:11.021408   48339 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1212 00:15:11.021413   48339 command_runner.go:130] >           "Name": "cni-loopback",
	I1212 00:15:11.021418   48339 command_runner.go:130] >           "Plugins": [
	I1212 00:15:11.021422   48339 command_runner.go:130] >             {
	I1212 00:15:11.021426   48339 command_runner.go:130] >               "Network": {
	I1212 00:15:11.021430   48339 command_runner.go:130] >                 "ipam": {},
	I1212 00:15:11.021438   48339 command_runner.go:130] >                 "type": "loopback"
	I1212 00:15:11.021445   48339 command_runner.go:130] >               },
	I1212 00:15:11.021450   48339 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1212 00:15:11.021457   48339 command_runner.go:130] >             }
	I1212 00:15:11.021461   48339 command_runner.go:130] >           ],
	I1212 00:15:11.021470   48339 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1212 00:15:11.021474   48339 command_runner.go:130] >         },
	I1212 00:15:11.021485   48339 command_runner.go:130] >         "IFName": "lo"
	I1212 00:15:11.021489   48339 command_runner.go:130] >       }
	I1212 00:15:11.021493   48339 command_runner.go:130] >     ],
	I1212 00:15:11.021498   48339 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1212 00:15:11.021504   48339 command_runner.go:130] >     "PluginDirs": [
	I1212 00:15:11.021509   48339 command_runner.go:130] >       "/opt/cni/bin"
	I1212 00:15:11.021514   48339 command_runner.go:130] >     ],
	I1212 00:15:11.021525   48339 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1212 00:15:11.021533   48339 command_runner.go:130] >     "Prefix": "eth"
	I1212 00:15:11.021537   48339 command_runner.go:130] >   },
	I1212 00:15:11.021540   48339 command_runner.go:130] >   "config": {
	I1212 00:15:11.021546   48339 command_runner.go:130] >     "cdiSpecDirs": [
	I1212 00:15:11.021552   48339 command_runner.go:130] >       "/etc/cdi",
	I1212 00:15:11.021558   48339 command_runner.go:130] >       "/var/run/cdi"
	I1212 00:15:11.021560   48339 command_runner.go:130] >     ],
	I1212 00:15:11.021563   48339 command_runner.go:130] >     "cni": {
	I1212 00:15:11.021567   48339 command_runner.go:130] >       "binDir": "",
	I1212 00:15:11.021571   48339 command_runner.go:130] >       "binDirs": [
	I1212 00:15:11.021574   48339 command_runner.go:130] >         "/opt/cni/bin"
	I1212 00:15:11.021577   48339 command_runner.go:130] >       ],
	I1212 00:15:11.021582   48339 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1212 00:15:11.021585   48339 command_runner.go:130] >       "confTemplate": "",
	I1212 00:15:11.021589   48339 command_runner.go:130] >       "ipPref": "",
	I1212 00:15:11.021592   48339 command_runner.go:130] >       "maxConfNum": 1,
	I1212 00:15:11.021597   48339 command_runner.go:130] >       "setupSerially": false,
	I1212 00:15:11.021601   48339 command_runner.go:130] >       "useInternalLoopback": false
	I1212 00:15:11.021604   48339 command_runner.go:130] >     },
	I1212 00:15:11.021610   48339 command_runner.go:130] >     "containerd": {
	I1212 00:15:11.021614   48339 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1212 00:15:11.021619   48339 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1212 00:15:11.021624   48339 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1212 00:15:11.021627   48339 command_runner.go:130] >       "runtimes": {
	I1212 00:15:11.021630   48339 command_runner.go:130] >         "runc": {
	I1212 00:15:11.021635   48339 command_runner.go:130] >           "ContainerAnnotations": null,
	I1212 00:15:11.021639   48339 command_runner.go:130] >           "PodAnnotations": null,
	I1212 00:15:11.021644   48339 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1212 00:15:11.021648   48339 command_runner.go:130] >           "cgroupWritable": false,
	I1212 00:15:11.021652   48339 command_runner.go:130] >           "cniConfDir": "",
	I1212 00:15:11.021656   48339 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1212 00:15:11.021664   48339 command_runner.go:130] >           "io_type": "",
	I1212 00:15:11.021670   48339 command_runner.go:130] >           "options": {
	I1212 00:15:11.021675   48339 command_runner.go:130] >             "BinaryName": "",
	I1212 00:15:11.021683   48339 command_runner.go:130] >             "CriuImagePath": "",
	I1212 00:15:11.021695   48339 command_runner.go:130] >             "CriuWorkPath": "",
	I1212 00:15:11.021703   48339 command_runner.go:130] >             "IoGid": 0,
	I1212 00:15:11.021708   48339 command_runner.go:130] >             "IoUid": 0,
	I1212 00:15:11.021712   48339 command_runner.go:130] >             "NoNewKeyring": false,
	I1212 00:15:11.021716   48339 command_runner.go:130] >             "Root": "",
	I1212 00:15:11.021723   48339 command_runner.go:130] >             "ShimCgroup": "",
	I1212 00:15:11.021728   48339 command_runner.go:130] >             "SystemdCgroup": false
	I1212 00:15:11.021734   48339 command_runner.go:130] >           },
	I1212 00:15:11.021739   48339 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1212 00:15:11.021745   48339 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1212 00:15:11.021749   48339 command_runner.go:130] >           "runtimePath": "",
	I1212 00:15:11.021755   48339 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1212 00:15:11.021761   48339 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1212 00:15:11.021765   48339 command_runner.go:130] >           "snapshotter": ""
	I1212 00:15:11.021770   48339 command_runner.go:130] >         }
	I1212 00:15:11.021774   48339 command_runner.go:130] >       }
	I1212 00:15:11.021778   48339 command_runner.go:130] >     },
	I1212 00:15:11.021790   48339 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1212 00:15:11.021799   48339 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1212 00:15:11.021805   48339 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1212 00:15:11.021810   48339 command_runner.go:130] >     "disableApparmor": false,
	I1212 00:15:11.021816   48339 command_runner.go:130] >     "disableHugetlbController": true,
	I1212 00:15:11.021821   48339 command_runner.go:130] >     "disableProcMount": false,
	I1212 00:15:11.021825   48339 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1212 00:15:11.021828   48339 command_runner.go:130] >     "enableCDI": true,
	I1212 00:15:11.021832   48339 command_runner.go:130] >     "enableSelinux": false,
	I1212 00:15:11.021840   48339 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1212 00:15:11.021845   48339 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1212 00:15:11.021852   48339 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1212 00:15:11.021858   48339 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1212 00:15:11.021868   48339 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1212 00:15:11.021873   48339 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1212 00:15:11.021877   48339 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1212 00:15:11.021886   48339 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1212 00:15:11.021890   48339 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1212 00:15:11.021896   48339 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1212 00:15:11.021901   48339 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1212 00:15:11.021907   48339 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1212 00:15:11.021910   48339 command_runner.go:130] >   },
	I1212 00:15:11.021914   48339 command_runner.go:130] >   "features": {
	I1212 00:15:11.021919   48339 command_runner.go:130] >     "supplemental_groups_policy": true
	I1212 00:15:11.021922   48339 command_runner.go:130] >   },
	I1212 00:15:11.021926   48339 command_runner.go:130] >   "golang": "go1.24.9",
	I1212 00:15:11.021938   48339 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 00:15:11.021951   48339 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 00:15:11.021954   48339 command_runner.go:130] >   "runtimeHandlers": [
	I1212 00:15:11.021957   48339 command_runner.go:130] >     {
	I1212 00:15:11.021961   48339 command_runner.go:130] >       "features": {
	I1212 00:15:11.021973   48339 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 00:15:11.021977   48339 command_runner.go:130] >         "user_namespaces": true
	I1212 00:15:11.021984   48339 command_runner.go:130] >       }
	I1212 00:15:11.021991   48339 command_runner.go:130] >     },
	I1212 00:15:11.021996   48339 command_runner.go:130] >     {
	I1212 00:15:11.022000   48339 command_runner.go:130] >       "features": {
	I1212 00:15:11.022006   48339 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 00:15:11.022013   48339 command_runner.go:130] >         "user_namespaces": true
	I1212 00:15:11.022016   48339 command_runner.go:130] >       },
	I1212 00:15:11.022021   48339 command_runner.go:130] >       "name": "runc"
	I1212 00:15:11.022026   48339 command_runner.go:130] >     }
	I1212 00:15:11.022029   48339 command_runner.go:130] >   ],
	I1212 00:15:11.022033   48339 command_runner.go:130] >   "status": {
	I1212 00:15:11.022045   48339 command_runner.go:130] >     "conditions": [
	I1212 00:15:11.022048   48339 command_runner.go:130] >       {
	I1212 00:15:11.022055   48339 command_runner.go:130] >         "message": "",
	I1212 00:15:11.022059   48339 command_runner.go:130] >         "reason": "",
	I1212 00:15:11.022065   48339 command_runner.go:130] >         "status": true,
	I1212 00:15:11.022070   48339 command_runner.go:130] >         "type": "RuntimeReady"
	I1212 00:15:11.022073   48339 command_runner.go:130] >       },
	I1212 00:15:11.022076   48339 command_runner.go:130] >       {
	I1212 00:15:11.022083   48339 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1212 00:15:11.022087   48339 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1212 00:15:11.022094   48339 command_runner.go:130] >         "status": false,
	I1212 00:15:11.022099   48339 command_runner.go:130] >         "type": "NetworkReady"
	I1212 00:15:11.022104   48339 command_runner.go:130] >       },
	I1212 00:15:11.022107   48339 command_runner.go:130] >       {
	I1212 00:15:11.022132   48339 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1212 00:15:11.022141   48339 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1212 00:15:11.022149   48339 command_runner.go:130] >         "status": false,
	I1212 00:15:11.022155   48339 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1212 00:15:11.022158   48339 command_runner.go:130] >       }
	I1212 00:15:11.022161   48339 command_runner.go:130] >     ]
	I1212 00:15:11.022164   48339 command_runner.go:130] >   }
	I1212 00:15:11.022166   48339 command_runner.go:130] > }
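The NetworkReady=false condition above is expected at this point: kindnet has not been applied yet, so /etc/cni/net.d is still empty. A minimal sketch for pulling out just the runtime conditions from the same dump, assuming jq on the host:

    # Extract the runtime conditions from the crictl info output shown above.
    minikube ssh -p functional-767012 -- sudo crictl info \
      | jq '.status.conditions[] | {type, status, reason}'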
	I1212 00:15:11.024522   48339 cni.go:84] Creating CNI manager for ""
	I1212 00:15:11.024547   48339 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:15:11.024564   48339 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:15:11.024607   48339 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-767012 NodeName:functional-767012 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:15:11.024773   48339 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-767012"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:15:11.024850   48339 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:15:11.031979   48339 command_runner.go:130] > kubeadm
	I1212 00:15:11.031999   48339 command_runner.go:130] > kubectl
	I1212 00:15:11.032004   48339 command_runner.go:130] > kubelet
	I1212 00:15:11.033031   48339 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:15:11.033131   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:15:11.041032   48339 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 00:15:11.054723   48339 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:15:11.067854   48339 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
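The generated config is written to /var/tmp/minikube/kubeadm.yaml.new on the node. A minimal sketch for validating it with the kubeadm binary found above, assuming "kubeadm config validate" is available in this release:

    # Validate the kubeadm config minikube just copied to the node.
    minikube ssh -p functional-767012 -- sudo \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new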
	I1212 00:15:11.081373   48339 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:15:11.085014   48339 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 00:15:11.085116   48339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:15:11.226173   48339 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:15:12.035778   48339 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012 for IP: 192.168.49.2
	I1212 00:15:12.035798   48339 certs.go:195] generating shared ca certs ...
	I1212 00:15:12.035830   48339 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:15:12.035967   48339 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 00:15:12.036010   48339 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 00:15:12.036017   48339 certs.go:257] generating profile certs ...
	I1212 00:15:12.036117   48339 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key
	I1212 00:15:12.036165   48339 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key.fcbff5a4
	I1212 00:15:12.036201   48339 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key
	I1212 00:15:12.036209   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:15:12.036224   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:15:12.036235   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:15:12.036248   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:15:12.036258   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:15:12.036270   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:15:12.036281   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:15:12.036294   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:15:12.036341   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 00:15:12.036372   48339 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 00:15:12.036381   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:15:12.036409   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:15:12.036440   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:15:12.036468   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 00:15:12.036516   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:15:12.036546   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem -> /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.036558   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.036578   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.037134   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:15:12.059224   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:15:12.079145   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:15:12.096868   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:15:12.114531   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:15:12.132828   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:15:12.150161   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:15:12.168014   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:15:12.185251   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 00:15:12.202557   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 00:15:12.219625   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:15:12.237574   48339 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:15:12.250472   48339 ssh_runner.go:195] Run: openssl version
	I1212 00:15:12.256541   48339 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 00:15:12.256947   48339 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.264387   48339 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 00:15:12.271688   48339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.275404   48339 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.275432   48339 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.275482   48339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.315860   48339 command_runner.go:130] > 51391683
	I1212 00:15:12.316400   48339 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:15:12.323656   48339 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.330945   48339 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 00:15:12.339131   48339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.343064   48339 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.343159   48339 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.343241   48339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.383845   48339 command_runner.go:130] > 3ec20f2e
	I1212 00:15:12.384302   48339 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:15:12.391740   48339 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.398710   48339 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:15:12.406076   48339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.409726   48339 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.409770   48339 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.409826   48339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.450507   48339 command_runner.go:130] > b5213941
	I1212 00:15:12.450926   48339 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 00:15:12.458188   48339 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:15:12.461873   48339 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:15:12.461949   48339 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 00:15:12.461961   48339 command_runner.go:130] > Device: 259,1	Inode: 1311423     Links: 1
	I1212 00:15:12.461969   48339 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:15:12.461975   48339 command_runner.go:130] > Access: 2025-12-12 00:11:05.099200071 +0000
	I1212 00:15:12.461979   48339 command_runner.go:130] > Modify: 2025-12-12 00:07:00.969098600 +0000
	I1212 00:15:12.461984   48339 command_runner.go:130] > Change: 2025-12-12 00:07:00.969098600 +0000
	I1212 00:15:12.461989   48339 command_runner.go:130] >  Birth: 2025-12-12 00:07:00.969098600 +0000
	I1212 00:15:12.462077   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:15:12.504549   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.505002   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:15:12.545847   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.545927   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:15:12.586405   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.586767   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:15:12.629151   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.629637   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:15:12.671966   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.672529   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 00:15:12.713858   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.714272   48339 kubeadm.go:401] StartCluster: {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:15:12.714367   48339 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 00:15:12.714442   48339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:15:12.749902   48339 cri.go:89] found id: ""
	I1212 00:15:12.750000   48339 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:15:12.759407   48339 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 00:15:12.759429   48339 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 00:15:12.759437   48339 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 00:15:12.760379   48339 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:15:12.760398   48339 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:15:12.760457   48339 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:15:12.768161   48339 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:15:12.768602   48339 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-767012" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.768706   48339 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-2343/kubeconfig needs updating (will repair): [kubeconfig missing "functional-767012" cluster setting kubeconfig missing "functional-767012" context setting]
	I1212 00:15:12.769002   48339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:15:12.769434   48339 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.769575   48339 kapi.go:59] client config for functional-767012: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key", CAFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:15:12.770098   48339 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 00:15:12.770119   48339 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 00:15:12.770125   48339 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 00:15:12.770129   48339 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 00:15:12.770134   48339 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 00:15:12.770402   48339 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:15:12.770508   48339 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 00:15:12.778529   48339 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 00:15:12.778562   48339 kubeadm.go:602] duration metric: took 18.158491ms to restartPrimaryControlPlane
	I1212 00:15:12.778572   48339 kubeadm.go:403] duration metric: took 64.30535ms to StartCluster
	I1212 00:15:12.778619   48339 settings.go:142] acquiring lock: {Name:mk6dd4250df69aeba4752e9f33aeef37272375c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:15:12.778710   48339 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.779343   48339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:15:12.779578   48339 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 00:15:12.779758   48339 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:15:12.779798   48339 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:15:12.779860   48339 addons.go:70] Setting storage-provisioner=true in profile "functional-767012"
	I1212 00:15:12.779873   48339 addons.go:239] Setting addon storage-provisioner=true in "functional-767012"
	I1212 00:15:12.779899   48339 host.go:66] Checking if "functional-767012" exists ...
	I1212 00:15:12.780379   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:12.780789   48339 addons.go:70] Setting default-storageclass=true in profile "functional-767012"
	I1212 00:15:12.780811   48339 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-767012"
	I1212 00:15:12.781090   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:12.784774   48339 out.go:179] * Verifying Kubernetes components...
	I1212 00:15:12.788318   48339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:15:12.822440   48339 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.822619   48339 kapi.go:59] client config for functional-767012: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key", CAFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:15:12.822882   48339 addons.go:239] Setting addon default-storageclass=true in "functional-767012"
	I1212 00:15:12.822910   48339 host.go:66] Checking if "functional-767012" exists ...
	I1212 00:15:12.823362   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:12.828706   48339 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:15:12.831719   48339 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:12.831746   48339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:15:12.831810   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:12.856565   48339 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:12.856586   48339 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:15:12.856663   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:12.891591   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:12.907113   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:13.031282   48339 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:15:13.038860   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:13.055219   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:13.785959   48339 node_ready.go:35] waiting up to 6m0s for node "functional-767012" to be "Ready" ...
	I1212 00:15:13.786096   48339 type.go:168] "Request Body" body=""
	I1212 00:15:13.786201   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:13.786332   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:13.786513   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:13.786544   48339 retry.go:31] will retry after 252.334378ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:13.786634   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:13.786678   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:13.786692   48339 retry.go:31] will retry after 187.958053ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:13.786725   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:13.975259   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:14.039772   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:14.044477   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.044582   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.044648   48339 retry.go:31] will retry after 322.190642ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.103040   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.103100   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.103119   48339 retry.go:31] will retry after 449.616448ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.286283   48339 type.go:168] "Request Body" body=""
	I1212 00:15:14.286357   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:14.286666   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:14.367911   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:14.423058   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.426726   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.426805   48339 retry.go:31] will retry after 304.882295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.552989   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:14.624219   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.624296   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.624324   48339 retry.go:31] will retry after 431.233251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.732500   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:14.787073   48339 type.go:168] "Request Body" body=""
	I1212 00:15:14.787160   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:14.787408   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:14.793570   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.793617   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.793638   48339 retry.go:31] will retry after 814.242182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.055819   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:15.115988   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:15.119844   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.119920   48339 retry.go:31] will retry after 1.173578041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.287015   48339 type.go:168] "Request Body" body=""
	I1212 00:15:15.287127   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:15.287435   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:15.608995   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:15.668352   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:15.672074   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.672106   48339 retry.go:31] will retry after 987.735436ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.786224   48339 type.go:168] "Request Body" body=""
	I1212 00:15:15.786336   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:15.786676   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:15.786781   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:16.286218   48339 type.go:168] "Request Body" body=""
	I1212 00:15:16.286309   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:16.286618   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:16.293963   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:16.350242   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:16.354044   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.354074   48339 retry.go:31] will retry after 1.703488512s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.660633   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:16.720806   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:16.720847   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.720866   48339 retry.go:31] will retry after 1.717481089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.787045   48339 type.go:168] "Request Body" body=""
	I1212 00:15:16.787165   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:16.787500   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:17.287197   48339 type.go:168] "Request Body" body=""
	I1212 00:15:17.287287   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:17.287663   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:17.786193   48339 type.go:168] "Request Body" body=""
	I1212 00:15:17.786301   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:17.786622   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:18.058032   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:18.119712   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:18.119758   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.119777   48339 retry.go:31] will retry after 2.564790813s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.286189   48339 type.go:168] "Request Body" body=""
	I1212 00:15:18.286256   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:18.286531   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:18.286571   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:18.438948   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:18.492343   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:18.495818   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.495853   48339 retry.go:31] will retry after 3.474173077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.786235   48339 type.go:168] "Request Body" body=""
	I1212 00:15:18.786319   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:18.786633   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:19.286373   48339 type.go:168] "Request Body" body=""
	I1212 00:15:19.286489   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:19.286915   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:19.786192   48339 type.go:168] "Request Body" body=""
	I1212 00:15:19.786262   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:19.786531   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:20.286266   48339 type.go:168] "Request Body" body=""
	I1212 00:15:20.286338   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:20.286671   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:20.286730   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:20.685395   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:20.744336   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:20.744377   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:20.744397   48339 retry.go:31] will retry after 3.068053389s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:20.786556   48339 type.go:168] "Request Body" body=""
	I1212 00:15:20.786632   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:20.787017   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:21.286794   48339 type.go:168] "Request Body" body=""
	I1212 00:15:21.286863   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:21.287178   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:21.786938   48339 type.go:168] "Request Body" body=""
	I1212 00:15:21.787095   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:21.787425   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:21.970778   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:22.029300   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:22.033382   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:22.033416   48339 retry.go:31] will retry after 3.143683139s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:22.286887   48339 type.go:168] "Request Body" body=""
	I1212 00:15:22.286963   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:22.287298   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:22.287349   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
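
Interleaved with the apply retries, the repeating GET https://192.168.49.2:8441/api/v1/nodes/functional-767012 blocks are minikube's node-readiness poll (node_ready.go): roughly every 500ms it fetches the node and checks its "Ready" condition, treating connection-refused as retryable. A hedged equivalent using client-go (assuming k8s.io/client-go is on the module path; the kubeconfig path and node name below are taken from this log):

// nodeready.go - poll a node's Ready condition until a deadline.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		// e.g. "connect: connection refused" while the apiserver is down
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for deadline := time.Now().Add(5 * time.Minute); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
		ready, err := nodeReady(cs, "functional-767012")
		if err != nil {
			fmt.Println("will retry:", err) // matches the W... node_ready.go:55 lines
			continue
		}
		if ready {
			fmt.Println("node is Ready")
			return
		}
	}
}
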
	I1212 00:15:22.786122   48339 type.go:168] "Request Body" body=""
	I1212 00:15:22.786203   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:22.786515   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:23.286522   48339 type.go:168] "Request Body" body=""
	I1212 00:15:23.286595   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:23.286902   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:23.786669   48339 type.go:168] "Request Body" body=""
	I1212 00:15:23.786750   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:23.787071   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:23.813245   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:23.872447   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:23.872484   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:23.872503   48339 retry.go:31] will retry after 4.295118946s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:24.286878   48339 type.go:168] "Request Body" body=""
	I1212 00:15:24.286966   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:24.287236   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:24.787020   48339 type.go:168] "Request Body" body=""
	I1212 00:15:24.787113   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:24.787396   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:24.787455   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:25.178129   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:25.240141   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:25.243777   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:25.243806   48339 retry.go:31] will retry after 9.168145583s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
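
Note that the "--validate=false" suggestion in the error text would not help here: validation fails only because kubectl cannot download the OpenAPI schema from localhost:8441, and the apply itself would hit the same refused socket. A quick way to confirm the apiserver endpoints are down, independent of any manifest (a minimal sketch; the two addresses are the ones appearing in this log):

// dialcheck.go - distinguish "apiserver unreachable" from "manifest invalid".
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	for _, addr := range []string{"127.0.0.1:8441", "192.168.49.2:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err) // expect "connection refused" here
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}
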
	I1212 00:15:25.286119   48339 type.go:168] "Request Body" body=""
	I1212 00:15:25.286212   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:25.286559   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:25.787134   48339 type.go:168] "Request Body" body=""
	I1212 00:15:25.787314   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:25.787683   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:26.286268   48339 type.go:168] "Request Body" body=""
	I1212 00:15:26.286357   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:26.286692   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:26.786194   48339 type.go:168] "Request Body" body=""
	I1212 00:15:26.786286   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:26.786601   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:27.286932   48339 type.go:168] "Request Body" body=""
	I1212 00:15:27.287015   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:27.287267   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:27.287315   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:27.787077   48339 type.go:168] "Request Body" body=""
	I1212 00:15:27.787176   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:27.787513   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:28.168008   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:28.231881   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:28.231917   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:28.231944   48339 retry.go:31] will retry after 6.344313185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:28.286314   48339 type.go:168] "Request Body" body=""
	I1212 00:15:28.286400   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:28.286700   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:28.786192   48339 type.go:168] "Request Body" body=""
	I1212 00:15:28.786267   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:28.786531   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:29.286238   48339 type.go:168] "Request Body" body=""
	I1212 00:15:29.286308   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:29.286623   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:29.786295   48339 type.go:168] "Request Body" body=""
	I1212 00:15:29.786368   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:29.786689   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:29.786753   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:30.287110   48339 type.go:168] "Request Body" body=""
	I1212 00:15:30.287175   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:30.287426   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:30.786872   48339 type.go:168] "Request Body" body=""
	I1212 00:15:30.786960   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:30.787297   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:31.286942   48339 type.go:168] "Request Body" body=""
	I1212 00:15:31.287032   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:31.287368   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:31.786980   48339 type.go:168] "Request Body" body=""
	I1212 00:15:31.787074   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:31.787418   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:31.787478   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:32.286186   48339 type.go:168] "Request Body" body=""
	I1212 00:15:32.286271   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:32.286599   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:32.786430   48339 type.go:168] "Request Body" body=""
	I1212 00:15:32.786534   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:32.786856   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:33.286674   48339 type.go:168] "Request Body" body=""
	I1212 00:15:33.286767   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:33.287049   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:33.786796   48339 type.go:168] "Request Body" body=""
	I1212 00:15:33.786868   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:33.787225   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:34.286903   48339 type.go:168] "Request Body" body=""
	I1212 00:15:34.287005   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:34.287348   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:34.287421   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:34.412873   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:34.471886   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:34.475429   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:34.475459   48339 retry.go:31] will retry after 5.427832253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:34.576727   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:34.645023   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:34.645064   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:34.645084   48339 retry.go:31] will retry after 14.315988892s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
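
The storageclass.yaml and storage-provisioner.yaml retries interleave in this log because each addon is applied independently, so their retry timers tick concurrently. A rough sketch of that shape (an assumed structure, not minikube's actual addons code; the sudo/KUBECONFIG plumbing from the logged command is omitted):

// addonsketch.go - apply each addon manifest from its own goroutine.
package main

import (
	"fmt"
	"os/exec"
	"sync"
)

func apply(manifest string, wg *sync.WaitGroup) {
	defer wg.Done()
	// Same kubectl invocation the log shows, minus the sudo/KUBECONFIG prefix.
	out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
	if err != nil {
		fmt.Printf("%s failed: %v\n%s", manifest, err, out)
		return
	}
	fmt.Printf("%s applied\n", manifest)
}

func main() {
	var wg sync.WaitGroup
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		wg.Add(1)
		go apply(m, &wg)
	}
	wg.Wait()
}
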
	I1212 00:15:34.786162   48339 type.go:168] "Request Body" body=""
	I1212 00:15:34.786245   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:34.786506   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:35.286256   48339 type.go:168] "Request Body" body=""
	I1212 00:15:35.286369   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:35.286766   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:35.786480   48339 type.go:168] "Request Body" body=""
	I1212 00:15:35.786551   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:35.786861   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:36.286546   48339 type.go:168] "Request Body" body=""
	I1212 00:15:36.286613   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:36.286890   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:36.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:15:36.786309   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:36.786640   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:36.786704   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:37.286243   48339 type.go:168] "Request Body" body=""
	I1212 00:15:37.286323   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:37.286640   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:37.786331   48339 type.go:168] "Request Body" body=""
	I1212 00:15:37.786426   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:37.786691   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:38.286739   48339 type.go:168] "Request Body" body=""
	I1212 00:15:38.286834   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:38.287212   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:38.787067   48339 type.go:168] "Request Body" body=""
	I1212 00:15:38.787165   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:38.787505   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:38.787556   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:39.286897   48339 type.go:168] "Request Body" body=""
	I1212 00:15:39.286974   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:39.287246   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:39.787072   48339 type.go:168] "Request Body" body=""
	I1212 00:15:39.787155   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:39.787481   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:39.903977   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:39.961517   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:39.961553   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:39.961584   48339 retry.go:31] will retry after 9.825060256s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:40.286904   48339 type.go:168] "Request Body" body=""
	I1212 00:15:40.287016   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:40.287324   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:40.786920   48339 type.go:168] "Request Body" body=""
	I1212 00:15:40.787007   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:40.787265   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:41.287079   48339 type.go:168] "Request Body" body=""
	I1212 00:15:41.287171   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:41.287483   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:41.287535   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:41.786177   48339 type.go:168] "Request Body" body=""
	I1212 00:15:41.786245   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:41.786586   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:42.286210   48339 type.go:168] "Request Body" body=""
	I1212 00:15:42.286304   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:42.286665   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:42.786373   48339 type.go:168] "Request Body" body=""
	I1212 00:15:42.786449   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:42.786735   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:43.286695   48339 type.go:168] "Request Body" body=""
	I1212 00:15:43.286781   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:43.287063   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:43.786792   48339 type.go:168] "Request Body" body=""
	I1212 00:15:43.786867   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:43.787142   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:43.787197   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:44.286976   48339 type.go:168] "Request Body" body=""
	I1212 00:15:44.287083   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:44.287398   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:44.786120   48339 type.go:168] "Request Body" body=""
	I1212 00:15:44.786194   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:44.786513   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:45.286282   48339 type.go:168] "Request Body" body=""
	I1212 00:15:45.286447   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:45.286824   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:45.786533   48339 type.go:168] "Request Body" body=""
	I1212 00:15:45.786632   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:45.786951   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:46.286792   48339 type.go:168] "Request Body" body=""
	I1212 00:15:46.286884   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:46.287186   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:46.287237   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:46.786874   48339 type.go:168] "Request Body" body=""
	I1212 00:15:46.786956   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:46.787268   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:47.287109   48339 type.go:168] "Request Body" body=""
	I1212 00:15:47.287201   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:47.287499   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:47.786233   48339 type.go:168] "Request Body" body=""
	I1212 00:15:47.786303   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:47.786629   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:48.286436   48339 type.go:168] "Request Body" body=""
	I1212 00:15:48.286503   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:48.286772   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:48.786216   48339 type.go:168] "Request Body" body=""
	I1212 00:15:48.786290   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:48.786671   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:48.786725   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:48.962079   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:49.024775   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:49.024824   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:49.024842   48339 retry.go:31] will retry after 15.053349185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:49.286133   48339 type.go:168] "Request Body" body=""
	I1212 00:15:49.286218   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:49.286771   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:49.786188   48339 type.go:168] "Request Body" body=""
	I1212 00:15:49.786266   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:49.786639   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:49.786790   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:49.873069   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:49.873108   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:49.873126   48339 retry.go:31] will retry after 17.371130847s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:50.286878   48339 type.go:168] "Request Body" body=""
	I1212 00:15:50.286961   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:50.287310   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:50.787122   48339 type.go:168] "Request Body" body=""
	I1212 00:15:50.787202   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:50.787523   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:50.787579   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:51.286912   48339 type.go:168] "Request Body" body=""
	I1212 00:15:51.286981   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:51.287298   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:51.787059   48339 type.go:168] "Request Body" body=""
	I1212 00:15:51.787135   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:51.787456   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:52.286151   48339 type.go:168] "Request Body" body=""
	I1212 00:15:52.286226   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:52.286553   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:52.786336   48339 type.go:168] "Request Body" body=""
	I1212 00:15:52.786407   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:52.786699   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:53.286548   48339 type.go:168] "Request Body" body=""
	I1212 00:15:53.286619   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:53.286939   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:53.287009   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:53.786505   48339 type.go:168] "Request Body" body=""
	I1212 00:15:53.786577   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:53.786912   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:54.286719   48339 type.go:168] "Request Body" body=""
	I1212 00:15:54.286786   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:54.287059   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:54.786836   48339 type.go:168] "Request Body" body=""
	I1212 00:15:54.786933   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:54.787274   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:55.287094   48339 type.go:168] "Request Body" body=""
	I1212 00:15:55.287171   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:55.287511   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:55.287570   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:55.786152   48339 type.go:168] "Request Body" body=""
	I1212 00:15:55.786220   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:55.786474   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:56.286213   48339 type.go:168] "Request Body" body=""
	I1212 00:15:56.286312   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:56.286631   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:56.786168   48339 type.go:168] "Request Body" body=""
	I1212 00:15:56.786239   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:56.786561   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:57.287075   48339 type.go:168] "Request Body" body=""
	I1212 00:15:57.287147   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:57.287400   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:57.787153   48339 type.go:168] "Request Body" body=""
	I1212 00:15:57.787225   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:57.787534   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:57.787585   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:58.286375   48339 type.go:168] "Request Body" body=""
	I1212 00:15:58.286450   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:58.286783   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:58.786215   48339 type.go:168] "Request Body" body=""
	I1212 00:15:58.786282   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:58.786594   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:59.286241   48339 type.go:168] "Request Body" body=""
	I1212 00:15:59.286312   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:59.286622   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:59.786317   48339 type.go:168] "Request Body" body=""
	I1212 00:15:59.786388   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:59.786719   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:00.292274   48339 type.go:168] "Request Body" body=""
	I1212 00:16:00.292358   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:00.292654   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:00.292703   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:00.786205   48339 type.go:168] "Request Body" body=""
	I1212 00:16:00.786286   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:00.786644   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:01.286348   48339 type.go:168] "Request Body" body=""
	I1212 00:16:01.286432   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:01.286773   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:01.787146   48339 type.go:168] "Request Body" body=""
	I1212 00:16:01.787221   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:01.787510   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:02.286209   48339 type.go:168] "Request Body" body=""
	I1212 00:16:02.286300   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:02.286617   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:02.786467   48339 type.go:168] "Request Body" body=""
	I1212 00:16:02.786540   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:02.786883   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:02.786938   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:03.286672   48339 type.go:168] "Request Body" body=""
	I1212 00:16:03.286737   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:03.287012   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:03.786797   48339 type.go:168] "Request Body" body=""
	I1212 00:16:03.786868   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:03.787218   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:04.078782   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:16:04.137731   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:04.141181   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:04.141215   48339 retry.go:31] will retry after 17.411337884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
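
[editor's note] The retry.go lines above show minikube's addon-apply pattern: run the kubectl apply and, on failure, schedule another attempt after a randomized delay (17.4s here). A generic sketch of a retry loop with jittered, growing delays follows; the delay schedule and attempt cap are assumptions for illustration, not minikube's actual retry policy.

package retrysketch

import (
	"fmt"
	"math/rand"
	"time"
)

// applyWithRetry keeps invoking apply until it succeeds or attempts run out,
// sleeping a jittered, growing delay between failures: the shape of the
// "will retry after 17.411337884s" messages in the log.
func applyWithRetry(apply func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		// Base delay doubles each attempt; jitter spreads retries out.
		base := time.Duration(1<<uint(i)) * time.Second
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}
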
	I1212 00:16:04.286486   48339 type.go:168] "Request Body" body=""
	I1212 00:16:04.286564   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:04.286889   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:04.786185   48339 type.go:168] "Request Body" body=""
	I1212 00:16:04.786276   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:04.786662   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:05.286261   48339 type.go:168] "Request Body" body=""
	I1212 00:16:05.286336   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:05.286651   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:05.286703   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:05.786375   48339 type.go:168] "Request Body" body=""
	I1212 00:16:05.786467   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:05.786794   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:06.286188   48339 type.go:168] "Request Body" body=""
	I1212 00:16:06.286265   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:06.286589   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:06.786260   48339 type.go:168] "Request Body" body=""
	I1212 00:16:06.786341   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:06.786641   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:07.245320   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:16:07.286783   48339 type.go:168] "Request Body" body=""
	I1212 00:16:07.286895   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:07.287194   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:07.287250   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:07.304749   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:07.304789   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:07.304807   48339 retry.go:31] will retry after 24.953429831s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
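
[editor's note] The error text points at the mechanism: kubectl's client-side validation must first download the OpenAPI schema from the apiserver, and that download is what hits connection refused, so the apply fails before any manifest is submitted. The message's suggested workaround is --validate=false; a hedged os/exec sketch of that invocation, with paths mirroring the log (whether skipping validation is appropriate is situational, and here the apiserver itself is down, so the apply would still fail):

package applysketch

import (
	"os"
	"os/exec"
)

// applyNoValidate runs the same kubectl apply the log shows, but with
// --validate=false so kubectl does not need the apiserver's /openapi
// endpoint just to accept the manifest.
func applyNoValidate(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "--validate=false", "-f", manifest)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}
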
	I1212 00:16:07.787063   48339 type.go:168] "Request Body" body=""
	I1212 00:16:07.787138   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:07.787437   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:08.286404   48339 type.go:168] "Request Body" body=""
	I1212 00:16:08.286476   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:08.286783   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:08.786218   48339 type.go:168] "Request Body" body=""
	I1212 00:16:08.786293   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:08.786671   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:09.286981   48339 type.go:168] "Request Body" body=""
	I1212 00:16:09.287066   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:09.287329   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:09.287373   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:09.787100   48339 type.go:168] "Request Body" body=""
	I1212 00:16:09.787195   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:09.787534   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:10.286235   48339 type.go:168] "Request Body" body=""
	I1212 00:16:10.286321   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:10.286701   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:10.786206   48339 type.go:168] "Request Body" body=""
	I1212 00:16:10.786294   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:10.786608   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:11.286206   48339 type.go:168] "Request Body" body=""
	I1212 00:16:11.286280   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:11.286613   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:11.786187   48339 type.go:168] "Request Body" body=""
	I1212 00:16:11.786279   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:11.786620   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:11.786679   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:12.286942   48339 type.go:168] "Request Body" body=""
	I1212 00:16:12.287031   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:12.287292   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:12.786305   48339 type.go:168] "Request Body" body=""
	I1212 00:16:12.786379   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:12.786714   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:13.286636   48339 type.go:168] "Request Body" body=""
	I1212 00:16:13.286735   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:13.287061   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:13.786837   48339 type.go:168] "Request Body" body=""
	I1212 00:16:13.786905   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:13.787175   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:13.787217   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:14.286785   48339 type.go:168] "Request Body" body=""
	I1212 00:16:14.286860   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:14.287199   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:14.786985   48339 type.go:168] "Request Body" body=""
	I1212 00:16:14.787080   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:14.787391   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:15.287017   48339 type.go:168] "Request Body" body=""
	I1212 00:16:15.287092   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:15.287365   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:15.786104   48339 type.go:168] "Request Body" body=""
	I1212 00:16:15.786194   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:15.786515   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:16.286214   48339 type.go:168] "Request Body" body=""
	I1212 00:16:16.286312   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:16.286611   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:16.286662   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:16.787101   48339 type.go:168] "Request Body" body=""
	I1212 00:16:16.787177   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:16.787436   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:17.286203   48339 type.go:168] "Request Body" body=""
	I1212 00:16:17.286282   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:17.286588   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:17.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:16:17.786273   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:17.786616   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:18.286463   48339 type.go:168] "Request Body" body=""
	I1212 00:16:18.286538   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:18.286889   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:18.286938   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:18.786189   48339 type.go:168] "Request Body" body=""
	I1212 00:16:18.786282   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:18.786626   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:19.286360   48339 type.go:168] "Request Body" body=""
	I1212 00:16:19.286434   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:19.286751   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:19.786162   48339 type.go:168] "Request Body" body=""
	I1212 00:16:19.786239   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:19.786514   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:20.286225   48339 type.go:168] "Request Body" body=""
	I1212 00:16:20.286301   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:20.286620   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:20.786210   48339 type.go:168] "Request Body" body=""
	I1212 00:16:20.786283   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:20.786562   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:20.786610   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:21.286154   48339 type.go:168] "Request Body" body=""
	I1212 00:16:21.286236   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:21.286508   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:21.552920   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:16:21.609312   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:21.612881   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:21.612910   48339 retry.go:31] will retry after 24.114548677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:21.786128   48339 type.go:168] "Request Body" body=""
	I1212 00:16:21.786221   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:21.786547   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:22.286255   48339 type.go:168] "Request Body" body=""
	I1212 00:16:22.286336   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:22.286677   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:22.786457   48339 type.go:168] "Request Body" body=""
	I1212 00:16:22.786525   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:22.786771   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:22.786820   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:23.286766   48339 type.go:168] "Request Body" body=""
	I1212 00:16:23.286841   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:23.287234   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:23.787069   48339 type.go:168] "Request Body" body=""
	I1212 00:16:23.787143   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:23.787481   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:24.286171   48339 type.go:168] "Request Body" body=""
	I1212 00:16:24.286252   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:24.286533   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:24.786238   48339 type.go:168] "Request Body" body=""
	I1212 00:16:24.786310   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:24.786625   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:25.286352   48339 type.go:168] "Request Body" body=""
	I1212 00:16:25.286433   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:25.286738   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:25.286790   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:25.786139   48339 type.go:168] "Request Body" body=""
	I1212 00:16:25.786227   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:25.786511   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:26.286200   48339 type.go:168] "Request Body" body=""
	I1212 00:16:26.286292   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:26.286614   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:26.786309   48339 type.go:168] "Request Body" body=""
	I1212 00:16:26.786416   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:26.786728   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:27.286245   48339 type.go:168] "Request Body" body=""
	I1212 00:16:27.286316   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:27.286597   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:27.786283   48339 type.go:168] "Request Body" body=""
	I1212 00:16:27.786355   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:27.786690   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:27.786745   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:28.286519   48339 type.go:168] "Request Body" body=""
	I1212 00:16:28.286594   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:28.286931   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:28.786692   48339 type.go:168] "Request Body" body=""
	I1212 00:16:28.786765   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:28.787040   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:29.286807   48339 type.go:168] "Request Body" body=""
	I1212 00:16:29.286879   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:29.287246   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:29.786890   48339 type.go:168] "Request Body" body=""
	I1212 00:16:29.786966   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:29.787276   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:29.787321   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:30.287063   48339 type.go:168] "Request Body" body=""
	I1212 00:16:30.287137   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:30.287393   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:30.787118   48339 type.go:168] "Request Body" body=""
	I1212 00:16:30.787201   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:30.787551   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:31.286150   48339 type.go:168] "Request Body" body=""
	I1212 00:16:31.286271   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:31.286606   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:31.786159   48339 type.go:168] "Request Body" body=""
	I1212 00:16:31.786233   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:31.786502   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:32.259311   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:16:32.286776   48339 type.go:168] "Request Body" body=""
	I1212 00:16:32.286852   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:32.287141   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:32.287191   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:32.315690   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:32.319144   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:32.319251   48339 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
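
[editor's note] At this point the storage-provisioner addon has exhausted its retries and minikube surfaces the failure but keeps going. Since every attempt died on the same unreachable apiserver, one way to avoid burning retries in this state is to gate each apply on an apiserver health probe. A hedged sketch follows; the /healthz endpoint and the TLS handling are assumptions for a local-cluster probe, not minikube code.

package healthgate

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverHealthy hits /healthz and reports whether the apiserver answers.
// InsecureSkipVerify is tolerable only for a throwaway local probe like this
// sketch; a real client should trust the cluster CA instead.
func apiserverHealthy(baseURL string) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(baseURL + "/healthz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func Example() {
	if !apiserverHealthy("https://192.168.49.2:8441") {
		fmt.Println("apiserver not ready; skip the kubectl apply and retry later")
	}
}
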
	I1212 00:16:32.786146   48339 type.go:168] "Request Body" body=""
	I1212 00:16:32.786230   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:32.786571   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:33.286348   48339 type.go:168] "Request Body" body=""
	I1212 00:16:33.286423   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:33.286668   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:33.786187   48339 type.go:168] "Request Body" body=""
	I1212 00:16:33.786262   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:33.786597   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:34.286351   48339 type.go:168] "Request Body" body=""
	I1212 00:16:34.286425   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:34.286777   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:34.787084   48339 type.go:168] "Request Body" body=""
	I1212 00:16:34.787156   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:34.787405   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:34.787444   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:35.286102   48339 type.go:168] "Request Body" body=""
	I1212 00:16:35.286177   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:35.286533   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:35.786215   48339 type.go:168] "Request Body" body=""
	I1212 00:16:35.786285   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:35.786632   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:36.287087   48339 type.go:168] "Request Body" body=""
	I1212 00:16:36.287160   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:36.287418   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:36.786103   48339 type.go:168] "Request Body" body=""
	I1212 00:16:36.786193   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:36.786526   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:37.286129   48339 type.go:168] "Request Body" body=""
	I1212 00:16:37.286202   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:37.286544   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:37.286600   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:37.787026   48339 type.go:168] "Request Body" body=""
	I1212 00:16:37.787100   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:37.787357   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:38.286531   48339 type.go:168] "Request Body" body=""
	I1212 00:16:38.286611   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:38.286935   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:38.786684   48339 type.go:168] "Request Body" body=""
	I1212 00:16:38.786754   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:38.787096   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:39.286816   48339 type.go:168] "Request Body" body=""
	I1212 00:16:39.286887   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:39.287147   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:39.287187   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:39.786891   48339 type.go:168] "Request Body" body=""
	I1212 00:16:39.786969   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:39.787334   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:40.287018   48339 type.go:168] "Request Body" body=""
	I1212 00:16:40.287113   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:40.287426   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:40.786868   48339 type.go:168] "Request Body" body=""
	I1212 00:16:40.786934   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:40.787251   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:41.287087   48339 type.go:168] "Request Body" body=""
	I1212 00:16:41.287180   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:41.287508   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:41.287561   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:41.786226   48339 type.go:168] "Request Body" body=""
	I1212 00:16:41.786304   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:41.786661   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:42.286381   48339 type.go:168] "Request Body" body=""
	I1212 00:16:42.286463   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:42.286744   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:42.786456   48339 type.go:168] "Request Body" body=""
	I1212 00:16:42.786532   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:42.786873   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:43.286753   48339 type.go:168] "Request Body" body=""
	I1212 00:16:43.286834   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:43.287195   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:43.786972   48339 type.go:168] "Request Body" body=""
	I1212 00:16:43.787061   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:43.787340   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:43.787388   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:44.287150   48339 type.go:168] "Request Body" body=""
	I1212 00:16:44.287228   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:44.287570   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:44.786168   48339 type.go:168] "Request Body" body=""
	I1212 00:16:44.786239   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:44.786580   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:45.286154   48339 type.go:168] "Request Body" body=""
	I1212 00:16:45.286221   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:45.286507   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:45.728277   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:16:45.786458   48339 type.go:168] "Request Body" body=""
	I1212 00:16:45.786536   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:45.786800   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:45.788347   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:45.788381   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:45.788458   48339 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 00:16:45.791789   48339 out.go:179] * Enabled addons: 
	I1212 00:16:45.795459   48339 addons.go:530] duration metric: took 1m33.015656607s for enable addons: enabled=[]
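
[editor's note] The addon phase ends here after 1m33s with an empty enabled set: both storageclass and storage-provisioner ran out of retries against the unreachable apiserver. The "duration metric" line is wall-clock time around the whole addon loop; a minimal sketch of that bookkeeping follows (names are illustrative, not minikube's addons.go).

package addonsketch

import (
	"log"
	"time"
)

// enableAddons runs each addon callback, records which ones succeeded, and
// logs total elapsed time, matching the shape of
// "duration metric: took ... for enable addons: enabled=[...]".
func enableAddons(addons map[string]func() error) []string {
	start := time.Now()
	var enabled []string
	for name, enable := range addons {
		if err := enable(); err != nil {
			log.Printf("! Enabling %q returned an error: %v", name, err)
			continue
		}
		enabled = append(enabled, name)
	}
	log.Printf("duration metric: took %s for enable addons: enabled=%v",
		time.Since(start), enabled)
	return enabled
}
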
	I1212 00:16:46.287010   48339 type.go:168] "Request Body" body=""
	I1212 00:16:46.287081   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:46.287404   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:46.287462   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:46.786103   48339 type.go:168] "Request Body" body=""
	I1212 00:16:46.786175   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:46.786467   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:47.286190   48339 type.go:168] "Request Body" body=""
	I1212 00:16:47.286259   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:47.286575   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:47.786211   48339 type.go:168] "Request Body" body=""
	I1212 00:16:47.786307   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:47.786638   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:48.286474   48339 type.go:168] "Request Body" body=""
	I1212 00:16:48.286546   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:48.286806   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:48.786468   48339 type.go:168] "Request Body" body=""
	I1212 00:16:48.786549   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:48.786891   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:48.786943   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:49.286477   48339 type.go:168] "Request Body" body=""
	I1212 00:16:49.286551   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:49.286848   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET poll repeats every ~500 ms from 00:16:49 through 00:17:49, each answered with "Response" status="" headers="" milliseconds=0, and node_ready.go:55 logs the identical "connection refused (will retry)" warning roughly every 2-2.5 s; ~120 near-identical request/response cycles elided ...]
	W1212 00:17:49.287415   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:49.786455   48339 type.go:168] "Request Body" body=""
	I1212 00:17:49.786542   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:49.786946   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:50.286236   48339 type.go:168] "Request Body" body=""
	I1212 00:17:50.286337   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:50.286768   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:50.787077   48339 type.go:168] "Request Body" body=""
	I1212 00:17:50.787161   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:50.787441   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:51.286171   48339 type.go:168] "Request Body" body=""
	I1212 00:17:51.286244   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:51.286582   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:51.786669   48339 type.go:168] "Request Body" body=""
	I1212 00:17:51.786740   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:51.787072   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:51.787128   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:52.286721   48339 type.go:168] "Request Body" body=""
	I1212 00:17:52.286792   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:52.287074   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:52.787058   48339 type.go:168] "Request Body" body=""
	I1212 00:17:52.787135   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:52.787466   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:53.286395   48339 type.go:168] "Request Body" body=""
	I1212 00:17:53.286475   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:53.286789   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:53.786172   48339 type.go:168] "Request Body" body=""
	I1212 00:17:53.786242   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:53.786578   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:54.286214   48339 type.go:168] "Request Body" body=""
	I1212 00:17:54.286284   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:54.286634   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:54.286688   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:54.786336   48339 type.go:168] "Request Body" body=""
	I1212 00:17:54.786415   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:54.786747   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:55.287099   48339 type.go:168] "Request Body" body=""
	I1212 00:17:55.287165   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:55.287421   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:55.787184   48339 type.go:168] "Request Body" body=""
	I1212 00:17:55.787260   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:55.787579   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:56.286208   48339 type.go:168] "Request Body" body=""
	I1212 00:17:56.286283   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:56.286616   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:56.786874   48339 type.go:168] "Request Body" body=""
	I1212 00:17:56.786946   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:56.787207   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:56.787260   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:57.286785   48339 type.go:168] "Request Body" body=""
	I1212 00:17:57.286872   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:57.287249   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:57.786907   48339 type.go:168] "Request Body" body=""
	I1212 00:17:57.786979   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:57.787325   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:58.287084   48339 type.go:168] "Request Body" body=""
	I1212 00:17:58.287156   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:58.287408   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:58.787171   48339 type.go:168] "Request Body" body=""
	I1212 00:17:58.787247   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:58.787569   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:58.787624   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:59.286220   48339 type.go:168] "Request Body" body=""
	I1212 00:17:59.286295   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:59.286637   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:59.786160   48339 type.go:168] "Request Body" body=""
	I1212 00:17:59.786226   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:59.786481   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:00.286342   48339 type.go:168] "Request Body" body=""
	I1212 00:18:00.286424   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:00.286745   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:00.786411   48339 type.go:168] "Request Body" body=""
	I1212 00:18:00.786487   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:00.786799   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:01.286485   48339 type.go:168] "Request Body" body=""
	I1212 00:18:01.286554   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:01.286822   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:01.286864   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:01.786179   48339 type.go:168] "Request Body" body=""
	I1212 00:18:01.786249   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:01.786559   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:02.286272   48339 type.go:168] "Request Body" body=""
	I1212 00:18:02.286352   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:02.286681   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:02.786397   48339 type.go:168] "Request Body" body=""
	I1212 00:18:02.786473   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:02.786729   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:03.286684   48339 type.go:168] "Request Body" body=""
	I1212 00:18:03.286756   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:03.287062   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:03.287108   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:03.786757   48339 type.go:168] "Request Body" body=""
	I1212 00:18:03.786848   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:03.787220   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:04.286890   48339 type.go:168] "Request Body" body=""
	I1212 00:18:04.286971   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:04.287276   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:04.787022   48339 type.go:168] "Request Body" body=""
	I1212 00:18:04.787101   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:04.787413   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:05.286164   48339 type.go:168] "Request Body" body=""
	I1212 00:18:05.286245   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:05.286589   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:05.786893   48339 type.go:168] "Request Body" body=""
	I1212 00:18:05.786965   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:05.787232   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:05.787272   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:06.287096   48339 type.go:168] "Request Body" body=""
	I1212 00:18:06.287189   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:06.287596   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:06.786293   48339 type.go:168] "Request Body" body=""
	I1212 00:18:06.786366   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:06.786687   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:07.286878   48339 type.go:168] "Request Body" body=""
	I1212 00:18:07.286943   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:07.287205   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:07.786915   48339 type.go:168] "Request Body" body=""
	I1212 00:18:07.786985   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:07.787328   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:07.787380   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:08.286823   48339 type.go:168] "Request Body" body=""
	I1212 00:18:08.286912   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:08.287273   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:08.786885   48339 type.go:168] "Request Body" body=""
	I1212 00:18:08.786957   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:08.787238   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:09.286257   48339 type.go:168] "Request Body" body=""
	I1212 00:18:09.286349   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:09.286656   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:09.786355   48339 type.go:168] "Request Body" body=""
	I1212 00:18:09.786440   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:09.786773   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:10.286467   48339 type.go:168] "Request Body" body=""
	I1212 00:18:10.286571   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:10.286828   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:10.286869   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:10.786201   48339 type.go:168] "Request Body" body=""
	I1212 00:18:10.786280   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:10.786615   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:11.286318   48339 type.go:168] "Request Body" body=""
	I1212 00:18:11.286395   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:11.286719   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:11.786408   48339 type.go:168] "Request Body" body=""
	I1212 00:18:11.786479   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:11.786752   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:12.286228   48339 type.go:168] "Request Body" body=""
	I1212 00:18:12.286305   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:12.286693   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:12.786445   48339 type.go:168] "Request Body" body=""
	I1212 00:18:12.786529   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:12.786847   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:12.786901   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:13.286863   48339 type.go:168] "Request Body" body=""
	I1212 00:18:13.286936   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:13.287242   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:13.786941   48339 type.go:168] "Request Body" body=""
	I1212 00:18:13.787040   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:13.787410   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:14.287037   48339 type.go:168] "Request Body" body=""
	I1212 00:18:14.287114   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:14.287432   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:14.786139   48339 type.go:168] "Request Body" body=""
	I1212 00:18:14.786211   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:14.786471   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:15.286165   48339 type.go:168] "Request Body" body=""
	I1212 00:18:15.286243   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:15.286559   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:15.286619   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:15.786275   48339 type.go:168] "Request Body" body=""
	I1212 00:18:15.786355   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:15.786707   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:16.286358   48339 type.go:168] "Request Body" body=""
	I1212 00:18:16.286435   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:16.286754   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:16.786213   48339 type.go:168] "Request Body" body=""
	I1212 00:18:16.786285   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:16.786626   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:17.286235   48339 type.go:168] "Request Body" body=""
	I1212 00:18:17.286316   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:17.286711   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:17.286765   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:17.786210   48339 type.go:168] "Request Body" body=""
	I1212 00:18:17.786299   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:17.786594   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:18.286667   48339 type.go:168] "Request Body" body=""
	I1212 00:18:18.286745   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:18.287093   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:18.786871   48339 type.go:168] "Request Body" body=""
	I1212 00:18:18.786957   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:18.787347   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:19.287118   48339 type.go:168] "Request Body" body=""
	I1212 00:18:19.287189   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:19.287538   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:19.287598   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:19.786173   48339 type.go:168] "Request Body" body=""
	I1212 00:18:19.786250   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:19.786591   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:20.286290   48339 type.go:168] "Request Body" body=""
	I1212 00:18:20.286368   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:20.286732   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:20.786421   48339 type.go:168] "Request Body" body=""
	I1212 00:18:20.786496   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:20.786769   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:21.286229   48339 type.go:168] "Request Body" body=""
	I1212 00:18:21.286299   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:21.286631   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:21.786238   48339 type.go:168] "Request Body" body=""
	I1212 00:18:21.786325   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:21.786704   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:21.786756   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:22.286201   48339 type.go:168] "Request Body" body=""
	I1212 00:18:22.286267   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:22.286513   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:22.786439   48339 type.go:168] "Request Body" body=""
	I1212 00:18:22.786511   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:22.786820   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:23.286747   48339 type.go:168] "Request Body" body=""
	I1212 00:18:23.286828   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:23.287136   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:23.786886   48339 type.go:168] "Request Body" body=""
	I1212 00:18:23.786958   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:23.787219   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:23.787272   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:24.287069   48339 type.go:168] "Request Body" body=""
	I1212 00:18:24.287145   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:24.287464   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:24.787101   48339 type.go:168] "Request Body" body=""
	I1212 00:18:24.787205   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:24.787503   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:25.286157   48339 type.go:168] "Request Body" body=""
	I1212 00:18:25.286231   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:25.286484   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:25.786179   48339 type.go:168] "Request Body" body=""
	I1212 00:18:25.786250   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:25.786581   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:26.286254   48339 type.go:168] "Request Body" body=""
	I1212 00:18:26.286329   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:26.286638   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:26.286693   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:26.786132   48339 type.go:168] "Request Body" body=""
	I1212 00:18:26.786199   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:26.786452   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:27.286167   48339 type.go:168] "Request Body" body=""
	I1212 00:18:27.286240   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:27.286520   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:27.786148   48339 type.go:168] "Request Body" body=""
	I1212 00:18:27.786250   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:27.786565   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:28.286477   48339 type.go:168] "Request Body" body=""
	I1212 00:18:28.286544   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:28.286801   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:28.286842   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:28.786195   48339 type.go:168] "Request Body" body=""
	I1212 00:18:28.786269   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:28.786563   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:29.286151   48339 type.go:168] "Request Body" body=""
	I1212 00:18:29.286228   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:29.286541   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:29.786783   48339 type.go:168] "Request Body" body=""
	I1212 00:18:29.786859   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:29.787122   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:30.286875   48339 type.go:168] "Request Body" body=""
	I1212 00:18:30.286953   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:30.287291   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:30.287342   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:30.786921   48339 type.go:168] "Request Body" body=""
	I1212 00:18:30.787054   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:30.787386   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:31.287040   48339 type.go:168] "Request Body" body=""
	I1212 00:18:31.287113   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:31.287420   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:31.786111   48339 type.go:168] "Request Body" body=""
	I1212 00:18:31.786190   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:31.786534   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:32.286242   48339 type.go:168] "Request Body" body=""
	I1212 00:18:32.286317   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:32.286644   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:32.787099   48339 type.go:168] "Request Body" body=""
	I1212 00:18:32.787169   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:32.787444   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:32.787485   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:33.286455   48339 type.go:168] "Request Body" body=""
	I1212 00:18:33.286531   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:33.286867   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:33.786185   48339 type.go:168] "Request Body" body=""
	I1212 00:18:33.786263   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:33.786599   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:34.287030   48339 type.go:168] "Request Body" body=""
	I1212 00:18:34.287101   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:34.287356   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:34.787107   48339 type.go:168] "Request Body" body=""
	I1212 00:18:34.787178   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:34.787462   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:34.787506   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:35.287151   48339 type.go:168] "Request Body" body=""
	I1212 00:18:35.287227   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:35.287561   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:35.786156   48339 type.go:168] "Request Body" body=""
	I1212 00:18:35.786227   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:35.786476   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:36.286225   48339 type.go:168] "Request Body" body=""
	I1212 00:18:36.286302   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:36.286658   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:36.786364   48339 type.go:168] "Request Body" body=""
	I1212 00:18:36.786441   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:36.786776   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:37.287081   48339 type.go:168] "Request Body" body=""
	I1212 00:18:37.287160   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:37.287429   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:37.287479   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:37.786116   48339 type.go:168] "Request Body" body=""
	I1212 00:18:37.786189   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:37.786493   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:38.286438   48339 type.go:168] "Request Body" body=""
	I1212 00:18:38.286517   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:38.286835   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:38.786180   48339 type.go:168] "Request Body" body=""
	I1212 00:18:38.786274   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:38.786571   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:39.286206   48339 type.go:168] "Request Body" body=""
	I1212 00:18:39.286282   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:39.286612   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:39.786192   48339 type.go:168] "Request Body" body=""
	I1212 00:18:39.786279   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:39.786630   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:39.786682   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:40.286354   48339 type.go:168] "Request Body" body=""
	I1212 00:18:40.286444   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:40.286835   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:40.786204   48339 type.go:168] "Request Body" body=""
	I1212 00:18:40.786287   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:40.786601   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:41.286231   48339 type.go:168] "Request Body" body=""
	I1212 00:18:41.286307   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:41.286653   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:41.786899   48339 type.go:168] "Request Body" body=""
	I1212 00:18:41.787023   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:41.787291   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:41.787331   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:42.287103   48339 type.go:168] "Request Body" body=""
	I1212 00:18:42.287183   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:42.287534   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:42.786413   48339 type.go:168] "Request Body" body=""
	I1212 00:18:42.786496   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:42.786838   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:43.286712   48339 type.go:168] "Request Body" body=""
	I1212 00:18:43.286788   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:43.287076   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:43.786845   48339 type.go:168] "Request Body" body=""
	I1212 00:18:43.786921   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:43.787255   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:44.287058   48339 type.go:168] "Request Body" body=""
	I1212 00:18:44.287145   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:44.287474   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:44.287531   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:44.786152   48339 type.go:168] "Request Body" body=""
	I1212 00:18:44.786226   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:44.786558   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:45.286226   48339 type.go:168] "Request Body" body=""
	I1212 00:18:45.286300   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:45.286609   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:45.786194   48339 type.go:168] "Request Body" body=""
	I1212 00:18:45.786265   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:45.786613   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:46.287075   48339 type.go:168] "Request Body" body=""
	I1212 00:18:46.287143   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:46.287427   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:46.786109   48339 type.go:168] "Request Body" body=""
	I1212 00:18:46.786181   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:46.786497   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:46.786555   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:47.286231   48339 type.go:168] "Request Body" body=""
	I1212 00:18:47.286325   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:47.286637   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:47.786326   48339 type.go:168] "Request Body" body=""
	I1212 00:18:47.786398   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:47.786701   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:48.286663   48339 type.go:168] "Request Body" body=""
	I1212 00:18:48.286736   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:48.287070   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:48.786872   48339 type.go:168] "Request Body" body=""
	I1212 00:18:48.786951   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:48.787298   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:48.787351   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:49.287060   48339 type.go:168] "Request Body" body=""
	I1212 00:18:49.287138   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:49.287405   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:49.786097   48339 type.go:168] "Request Body" body=""
	I1212 00:18:49.786175   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:49.786470   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:50.286223   48339 type.go:168] "Request Body" body=""
	I1212 00:18:50.286298   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:50.286634   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:50.786914   48339 type.go:168] "Request Body" body=""
	I1212 00:18:50.786986   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:50.787320   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:50.787380   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:51.287127   48339 type.go:168] "Request Body" body=""
	I1212 00:18:51.287204   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:51.287530   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:51.786096   48339 type.go:168] "Request Body" body=""
	I1212 00:18:51.786170   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:51.786513   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:52.286949   48339 type.go:168] "Request Body" body=""
	I1212 00:18:52.287031   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:52.287290   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:52.786331   48339 type.go:168] "Request Body" body=""
	I1212 00:18:52.786411   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:52.786755   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:53.286620   48339 type.go:168] "Request Body" body=""
	I1212 00:18:53.286694   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:53.287034   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:53.287095   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:53.786805   48339 type.go:168] "Request Body" body=""
	I1212 00:18:53.786875   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:53.787154   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:54.286914   48339 type.go:168] "Request Body" body=""
	I1212 00:18:54.286986   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:54.287311   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:54.787067   48339 type.go:168] "Request Body" body=""
	I1212 00:18:54.787140   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:54.787481   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:55.287095   48339 type.go:168] "Request Body" body=""
	I1212 00:18:55.287162   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:55.287415   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:55.287454   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:55.786091   48339 type.go:168] "Request Body" body=""
	I1212 00:18:55.786159   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:55.786468   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:56.286160   48339 type.go:168] "Request Body" body=""
	I1212 00:18:56.286232   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:56.286551   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:56.786800   48339 type.go:168] "Request Body" body=""
	I1212 00:18:56.786866   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:56.787137   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:57.286892   48339 type.go:168] "Request Body" body=""
	I1212 00:18:57.286971   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:57.287328   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:57.787142   48339 type.go:168] "Request Body" body=""
	I1212 00:18:57.787233   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:57.787583   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:57.787634   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:58.286388   48339 type.go:168] "Request Body" body=""
	I1212 00:18:58.286461   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:58.286718   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:58.786377   48339 type.go:168] "Request Body" body=""
	I1212 00:18:58.786448   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:58.786805   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:59.286505   48339 type.go:168] "Request Body" body=""
	I1212 00:18:59.286587   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:59.286890   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:59.786144   48339 type.go:168] "Request Body" body=""
	I1212 00:18:59.786225   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:59.786592   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:00.286264   48339 type.go:168] "Request Body" body=""
	I1212 00:19:00.286345   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:00.286656   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:00.286735   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:00.786383   48339 type.go:168] "Request Body" body=""
	I1212 00:19:00.786458   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:00.786791   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:01.286179   48339 type.go:168] "Request Body" body=""
	I1212 00:19:01.286250   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:01.286584   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:01.786253   48339 type.go:168] "Request Body" body=""
	I1212 00:19:01.786329   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:01.786671   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:02.286241   48339 type.go:168] "Request Body" body=""
	I1212 00:19:02.286317   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:02.286672   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:02.786408   48339 type.go:168] "Request Body" body=""
	I1212 00:19:02.786476   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:02.786723   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:02.786763   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:03.286700   48339 type.go:168] "Request Body" body=""
	I1212 00:19:03.286795   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:03.287188   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:03.787019   48339 type.go:168] "Request Body" body=""
	I1212 00:19:03.787097   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:03.787433   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:04.286096   48339 type.go:168] "Request Body" body=""
	I1212 00:19:04.286175   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:04.286490   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:04.786196   48339 type.go:168] "Request Body" body=""
	I1212 00:19:04.786274   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:04.786601   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:05.286296   48339 type.go:168] "Request Body" body=""
	I1212 00:19:05.286371   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:05.286696   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:05.286753   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:05.786175   48339 type.go:168] "Request Body" body=""
	I1212 00:19:05.786254   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:05.786601   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:06.286229   48339 type.go:168] "Request Body" body=""
	I1212 00:19:06.286306   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:06.286600   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:06.786191   48339 type.go:168] "Request Body" body=""
	I1212 00:19:06.786264   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:06.786580   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:07.286119   48339 type.go:168] "Request Body" body=""
	I1212 00:19:07.286199   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:07.286473   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:07.786186   48339 type.go:168] "Request Body" body=""
	I1212 00:19:07.786259   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:07.786536   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:07.786581   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:08.286383   48339 type.go:168] "Request Body" body=""
	I1212 00:19:08.286463   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:08.286917   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:08.786174   48339 type.go:168] "Request Body" body=""
	I1212 00:19:08.786248   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:08.786551   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:09.286220   48339 type.go:168] "Request Body" body=""
	I1212 00:19:09.286299   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:09.286639   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:09.786169   48339 type.go:168] "Request Body" body=""
	I1212 00:19:09.786240   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:09.786540   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:10.286846   48339 type.go:168] "Request Body" body=""
	I1212 00:19:10.286915   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:10.287189   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:10.287228   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:10.787020   48339 type.go:168] "Request Body" body=""
	I1212 00:19:10.787096   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:10.787416   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:11.286118   48339 type.go:168] "Request Body" body=""
	I1212 00:19:11.286193   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:11.286517   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:11.786150   48339 type.go:168] "Request Body" body=""
	I1212 00:19:11.786231   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:11.786516   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:12.286197   48339 type.go:168] "Request Body" body=""
	I1212 00:19:12.286271   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:12.286598   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:12.786359   48339 type.go:168] "Request Body" body=""
	I1212 00:19:12.786434   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:12.786739   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:12.786787   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:13.286561   48339 type.go:168] "Request Body" body=""
	I1212 00:19:13.286637   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:13.286885   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:13.786215   48339 type.go:168] "Request Body" body=""
	I1212 00:19:13.786291   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:13.786637   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:14.286215   48339 type.go:168] "Request Body" body=""
	I1212 00:19:14.286287   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:14.286589   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:14.786851   48339 type.go:168] "Request Body" body=""
	I1212 00:19:14.786918   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:14.787262   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:14.787320   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:15.287090   48339 type.go:168] "Request Body" body=""
	I1212 00:19:15.287165   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:15.287490   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:15.786172   48339 type.go:168] "Request Body" body=""
	I1212 00:19:15.786269   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:15.786579   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:16.286136   48339 type.go:168] "Request Body" body=""
	I1212 00:19:16.286210   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:16.286453   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:16.786222   48339 type.go:168] "Request Body" body=""
	I1212 00:19:16.786299   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:16.786659   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:17.286375   48339 type.go:168] "Request Body" body=""
	I1212 00:19:17.286453   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:17.286795   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:17.286857   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:17.786163   48339 type.go:168] "Request Body" body=""
	I1212 00:19:17.786237   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:17.786560   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:18.286451   48339 type.go:168] "Request Body" body=""
	I1212 00:19:18.286531   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:18.286856   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:18.786177   48339 type.go:168] "Request Body" body=""
	I1212 00:19:18.786251   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:18.786557   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:19.286160   48339 type.go:168] "Request Body" body=""
	I1212 00:19:19.286232   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:19.286485   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:19.786192   48339 type.go:168] "Request Body" body=""
	I1212 00:19:19.786263   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:19.786567   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:19.786614   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:20.286287   48339 type.go:168] "Request Body" body=""
	I1212 00:19:20.286370   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:20.286718   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:20.787029   48339 type.go:168] "Request Body" body=""
	I1212 00:19:20.787097   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:20.787342   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:21.287119   48339 type.go:168] "Request Body" body=""
	I1212 00:19:21.287198   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:21.287505   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:21.786177   48339 type.go:168] "Request Body" body=""
	I1212 00:19:21.786266   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:21.786579   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:22.287046   48339 type.go:168] "Request Body" body=""
	I1212 00:19:22.287111   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:22.287377   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:22.287420   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:22.786275   48339 type.go:168] "Request Body" body=""
	I1212 00:19:22.786343   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:22.786646   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:23.286598   48339 type.go:168] "Request Body" body=""
	I1212 00:19:23.286692   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:23.287042   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:23.786834   48339 type.go:168] "Request Body" body=""
	I1212 00:19:23.786913   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:23.787199   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:24.286916   48339 type.go:168] "Request Body" body=""
	I1212 00:19:24.287018   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:24.287331   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:24.787102   48339 type.go:168] "Request Body" body=""
	I1212 00:19:24.787174   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:24.787525   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:24.787578   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:25.286170   48339 type.go:168] "Request Body" body=""
	I1212 00:19:25.286246   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:25.286510   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:25.787014   48339 type.go:168] "Request Body" body=""
	I1212 00:19:25.787086   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:25.787411   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:26.286133   48339 type.go:168] "Request Body" body=""
	I1212 00:19:26.286205   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:26.286533   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:26.786125   48339 type.go:168] "Request Body" body=""
	I1212 00:19:26.786195   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:26.786499   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:27.286222   48339 type.go:168] "Request Body" body=""
	I1212 00:19:27.286294   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:27.286619   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:27.286677   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:27.786180   48339 type.go:168] "Request Body" body=""
	I1212 00:19:27.786252   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:27.786586   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:28.286372   48339 type.go:168] "Request Body" body=""
	I1212 00:19:28.286448   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:28.286700   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:28.786195   48339 type.go:168] "Request Body" body=""
	I1212 00:19:28.786271   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:28.786605   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:29.286191   48339 type.go:168] "Request Body" body=""
	I1212 00:19:29.286267   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:29.286615   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:29.786910   48339 type.go:168] "Request Body" body=""
	I1212 00:19:29.786981   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:29.787247   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:29.787287   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:30.287073   48339 type.go:168] "Request Body" body=""
	I1212 00:19:30.287154   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:30.287499   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:30.786184   48339 type.go:168] "Request Body" body=""
	I1212 00:19:30.786259   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:30.786602   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:31.286871   48339 type.go:168] "Request Body" body=""
	I1212 00:19:31.286942   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:31.287207   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:31.786942   48339 type.go:168] "Request Body" body=""
	I1212 00:19:31.787038   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:31.787334   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:31.787377   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:32.287019   48339 type.go:168] "Request Body" body=""
	I1212 00:19:32.287094   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:32.287431   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:32.786241   48339 type.go:168] "Request Body" body=""
	I1212 00:19:32.786308   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:32.786562   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:33.286586   48339 type.go:168] "Request Body" body=""
	I1212 00:19:33.286669   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:33.287081   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:33.786842   48339 type.go:168] "Request Body" body=""
	I1212 00:19:33.786915   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:33.787232   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:34.286965   48339 type.go:168] "Request Body" body=""
	I1212 00:19:34.287052   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:34.287321   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:34.287371   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:34.787103   48339 type.go:168] "Request Body" body=""
	I1212 00:19:34.787184   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:34.787507   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-767012 poll repeats every ~500ms (~120 attempts) from 00:19:35.286 through 00:20:36.786, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 emits the same will-retry warning every 2-2.5s throughout ...]
	I1212 00:20:37.286091   48339 type.go:168] "Request Body" body=""
	I1212 00:20:37.286166   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:37.286500   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:37.786184   48339 type.go:168] "Request Body" body=""
	I1212 00:20:37.786263   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:37.786572   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:38.286481   48339 type.go:168] "Request Body" body=""
	I1212 00:20:38.286552   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:38.286881   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:38.786443   48339 type.go:168] "Request Body" body=""
	I1212 00:20:38.786517   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:38.786773   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:39.286212   48339 type.go:168] "Request Body" body=""
	I1212 00:20:39.286290   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:39.286616   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:39.286667   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:39.786165   48339 type.go:168] "Request Body" body=""
	I1212 00:20:39.786242   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:39.786530   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:40.286181   48339 type.go:168] "Request Body" body=""
	I1212 00:20:40.286252   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:40.286503   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:40.786171   48339 type.go:168] "Request Body" body=""
	I1212 00:20:40.786243   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:40.786563   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:41.286127   48339 type.go:168] "Request Body" body=""
	I1212 00:20:41.286208   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:41.286529   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:41.787086   48339 type.go:168] "Request Body" body=""
	I1212 00:20:41.787155   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:41.787421   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:41.787466   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:42.286149   48339 type.go:168] "Request Body" body=""
	I1212 00:20:42.286244   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:42.286590   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:42.786361   48339 type.go:168] "Request Body" body=""
	I1212 00:20:42.786438   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:42.786779   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:43.286633   48339 type.go:168] "Request Body" body=""
	I1212 00:20:43.286702   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:43.286960   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:43.786722   48339 type.go:168] "Request Body" body=""
	I1212 00:20:43.786804   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:43.787206   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:44.286934   48339 type.go:168] "Request Body" body=""
	I1212 00:20:44.287023   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:44.287351   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:44.287409   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:44.786842   48339 type.go:168] "Request Body" body=""
	I1212 00:20:44.786917   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:44.787191   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:45.286977   48339 type.go:168] "Request Body" body=""
	I1212 00:20:45.287067   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:45.287390   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:45.787177   48339 type.go:168] "Request Body" body=""
	I1212 00:20:45.787257   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:45.787616   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:46.286986   48339 type.go:168] "Request Body" body=""
	I1212 00:20:46.287083   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:46.287348   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:46.787132   48339 type.go:168] "Request Body" body=""
	I1212 00:20:46.787205   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:46.787529   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:46.787585   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:47.286216   48339 type.go:168] "Request Body" body=""
	I1212 00:20:47.286289   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:47.286635   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:47.787077   48339 type.go:168] "Request Body" body=""
	I1212 00:20:47.787165   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:47.787464   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:48.286384   48339 type.go:168] "Request Body" body=""
	I1212 00:20:48.286461   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:48.286804   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:48.786191   48339 type.go:168] "Request Body" body=""
	I1212 00:20:48.786264   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:48.786586   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:49.286166   48339 type.go:168] "Request Body" body=""
	I1212 00:20:49.286240   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:49.286495   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:49.286545   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:49.786168   48339 type.go:168] "Request Body" body=""
	I1212 00:20:49.786246   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:49.786526   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:50.286237   48339 type.go:168] "Request Body" body=""
	I1212 00:20:50.286315   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:50.286678   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:50.786121   48339 type.go:168] "Request Body" body=""
	I1212 00:20:50.786187   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:50.786438   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:51.286121   48339 type.go:168] "Request Body" body=""
	I1212 00:20:51.286198   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:51.286527   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:51.286572   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:51.786155   48339 type.go:168] "Request Body" body=""
	I1212 00:20:51.786230   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:51.786579   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:52.286140   48339 type.go:168] "Request Body" body=""
	I1212 00:20:52.286212   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:52.286463   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:52.786341   48339 type.go:168] "Request Body" body=""
	I1212 00:20:52.786421   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:52.786710   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:53.286513   48339 type.go:168] "Request Body" body=""
	I1212 00:20:53.286636   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:53.286976   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:53.287052   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:53.786687   48339 type.go:168] "Request Body" body=""
	I1212 00:20:53.786760   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:53.787036   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:54.286863   48339 type.go:168] "Request Body" body=""
	I1212 00:20:54.286939   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:54.287249   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:54.787061   48339 type.go:168] "Request Body" body=""
	I1212 00:20:54.787143   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:54.787476   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:55.286970   48339 type.go:168] "Request Body" body=""
	I1212 00:20:55.287058   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:55.287308   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:55.287347   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:55.786924   48339 type.go:168] "Request Body" body=""
	I1212 00:20:55.787017   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:55.787330   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:56.287109   48339 type.go:168] "Request Body" body=""
	I1212 00:20:56.287182   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:56.287490   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:56.786897   48339 type.go:168] "Request Body" body=""
	I1212 00:20:56.786972   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:56.787241   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:57.287067   48339 type.go:168] "Request Body" body=""
	I1212 00:20:57.287145   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:57.287509   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:57.287566   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:57.786231   48339 type.go:168] "Request Body" body=""
	I1212 00:20:57.786303   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:57.786626   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:58.286503   48339 type.go:168] "Request Body" body=""
	I1212 00:20:58.286567   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:58.286819   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:58.786175   48339 type.go:168] "Request Body" body=""
	I1212 00:20:58.786249   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:58.786577   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:59.286221   48339 type.go:168] "Request Body" body=""
	I1212 00:20:59.286300   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:59.286643   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:59.786201   48339 type.go:168] "Request Body" body=""
	I1212 00:20:59.786272   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:59.786717   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:59.786766   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:00.286416   48339 type.go:168] "Request Body" body=""
	I1212 00:21:00.286498   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:00.286792   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:00.786190   48339 type.go:168] "Request Body" body=""
	I1212 00:21:00.786269   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:00.786582   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:01.286121   48339 type.go:168] "Request Body" body=""
	I1212 00:21:01.286194   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:01.286449   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:01.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:21:01.786294   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:01.786641   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:02.286227   48339 type.go:168] "Request Body" body=""
	I1212 00:21:02.286306   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:02.286637   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:02.286688   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:02.786377   48339 type.go:168] "Request Body" body=""
	I1212 00:21:02.786458   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:02.786789   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:03.286595   48339 type.go:168] "Request Body" body=""
	I1212 00:21:03.286680   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:03.287072   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:03.786847   48339 type.go:168] "Request Body" body=""
	I1212 00:21:03.786925   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:03.787257   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:04.287036   48339 type.go:168] "Request Body" body=""
	I1212 00:21:04.287108   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:04.287431   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:04.287477   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:04.786103   48339 type.go:168] "Request Body" body=""
	I1212 00:21:04.786178   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:04.786510   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:05.286213   48339 type.go:168] "Request Body" body=""
	I1212 00:21:05.286293   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:05.286653   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:05.786170   48339 type.go:168] "Request Body" body=""
	I1212 00:21:05.786239   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:05.786497   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:06.286230   48339 type.go:168] "Request Body" body=""
	I1212 00:21:06.286305   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:06.286647   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:06.786360   48339 type.go:168] "Request Body" body=""
	I1212 00:21:06.786435   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:06.786771   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:06.786825   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:07.286459   48339 type.go:168] "Request Body" body=""
	I1212 00:21:07.286536   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:07.286784   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:07.786179   48339 type.go:168] "Request Body" body=""
	I1212 00:21:07.786260   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:07.786613   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:08.286429   48339 type.go:168] "Request Body" body=""
	I1212 00:21:08.286512   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:08.286882   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:08.786159   48339 type.go:168] "Request Body" body=""
	I1212 00:21:08.786230   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:08.791780   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1212 00:21:08.791841   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:09.286491   48339 type.go:168] "Request Body" body=""
	I1212 00:21:09.286564   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:09.286869   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:09.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:21:09.786280   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:09.786589   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:10.286143   48339 type.go:168] "Request Body" body=""
	I1212 00:21:10.286219   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:10.286481   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:10.786184   48339 type.go:168] "Request Body" body=""
	I1212 00:21:10.786253   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:10.786584   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:11.286268   48339 type.go:168] "Request Body" body=""
	I1212 00:21:11.286353   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:11.286684   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:11.286736   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:11.786169   48339 type.go:168] "Request Body" body=""
	I1212 00:21:11.786241   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:11.786538   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:12.286254   48339 type.go:168] "Request Body" body=""
	I1212 00:21:12.286329   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:12.286629   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:12.786499   48339 type.go:168] "Request Body" body=""
	I1212 00:21:12.786576   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:12.786914   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:13.286651   48339 type.go:168] "Request Body" body=""
	I1212 00:21:13.286728   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:13.286985   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:13.287050   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:13.786749   48339 type.go:168] "Request Body" body=""
	I1212 00:21:13.786806   48339 node_ready.go:38] duration metric: took 6m0.00081197s for node "functional-767012" to be "Ready" ...
	I1212 00:21:13.789905   48339 out.go:203] 
	W1212 00:21:13.792750   48339 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 00:21:13.792769   48339 out.go:285] * 
	W1212 00:21:13.794879   48339 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:21:13.797575   48339 out.go:203] 
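The log above is minikube's node-readiness wait: one GET to /api/v1/nodes/functional-767012 every ~500ms until the 6m0s WaitNodeCondition deadline expires. Below is a minimal stdlib sketch of that poll-until-deadline pattern; checkNodeReady is a hypothetical stand-in for the real request issued by node_ready.go, not minikube's actual code.

package main

import (
	"context"
	"fmt"
	"time"
)

// checkNodeReady is a hypothetical stand-in for the GET against
// /api/v1/nodes/<name> seen in the log; here it always fails the way
// the apiserver did above.
func checkNodeReady(ctx context.Context, name string) (bool, error) {
	return false, fmt.Errorf("get node %q: dial tcp 192.168.49.2:8441: connect: connection refused", name)
}

// waitNodeReady polls every 500ms until the node reports Ready or the
// deadline passes, mirroring the retry/warning cadence in the log.
func waitNodeReady(name string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// Corresponds to "WaitNodeCondition: context deadline exceeded".
			return fmt.Errorf("waiting for node %q to be ready: %w", name, ctx.Err())
		case <-ticker.C:
			ready, err := checkNodeReady(ctx, name)
			if err != nil {
				continue // log-and-retry, as the node_ready.go:55 warnings do
			}
			if ready {
				return nil
			}
		}
	}
}

func main() {
	// A 2s deadline keeps the demo short; the test above waited 6m0s.
	if err := waitNodeReady("functional-767012", 2*time.Second); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}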
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.783378403Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.783446235Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.783574556Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.783659160Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.783717228Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.783783379Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.783840183Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.783901205Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.783968570Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.784048940Z" level=info msg="Connect containerd service"
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.784416877Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.785157717Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.794662816Z" level=info msg="Start subscribing containerd event"
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.795424792Z" level=info msg="Start recovering state"
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.795822604Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.795946897Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.833749922Z" level=info msg="Start event monitor"
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.833976992Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.834049535Z" level=info msg="Start streaming server"
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.834116111Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.834171899Z" level=info msg="runtime interface starting up..."
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.834228252Z" level=info msg="starting plugins..."
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.834293377Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 00:15:10 functional-767012 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 12 00:15:10 functional-767012 containerd[5228]: time="2025-12-12T00:15:10.837148687Z" level=info msg="containerd successfully booted in 0.080353s"
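The only error in the containerd startup above is the CNI one, and it is expected at this stage: the CRI plugin found no network config and keeps re-syncing /etc/cni/net.d until a CNI plugin installs one. A minimal sketch of that directory check, assuming containerd's default config path (this approximates the lookup, not containerd's exact code):

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	var files []string
	// libcni-style lookup: config files may be .conf, .conflist, or .json.
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pattern))
		if err != nil {
			continue // only possible with a malformed pattern
		}
		files = append(files, matches...)
	}
	if len(files) == 0 {
		fmt.Println("no network config found in /etc/cni/net.d")
		return
	}
	fmt.Println("CNI configs:", files)
}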
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:21:18.036123    8560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:21:18.036600    8560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:21:18.038121    8560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:21:18.038456    8560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:21:18.039910    8560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
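The memcache.go errors are kubectl's API discovery failing before any real request is sent. A minimal client-go sketch of that first discovery call, using the kubeconfig path from the failed command above:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// With the apiserver down, this reproduces the error above:
	// "couldn't get current server API group list ... connection refused"
	groups, err := clientset.Discovery().ServerGroups()
	if err != nil {
		fmt.Println("discovery failed:", err)
		return
	}
	for _, g := range groups.Groups {
		fmt.Println(g.Name)
	}
}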
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 00:21:18 up  1:03,  0 user,  load average: 0.70, 0.37, 0.55
	Linux functional-767012 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:21:14 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:21:15 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 12 00:21:15 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:15 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:15 functional-767012 kubelet[8380]: E1212 00:21:15.601873    8380 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:21:15 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:21:15 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:21:16 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 12 00:21:16 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:16 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:16 functional-767012 kubelet[8434]: E1212 00:21:16.373640    8434 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:21:16 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:21:16 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:21:17 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 12 00:21:17 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:17 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:17 functional-767012 kubelet[8459]: E1212 00:21:17.086821    8459 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:21:17 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:21:17 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:21:17 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 12 00:21:17 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:17 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:17 functional-767012 kubelet[8508]: E1212 00:21:17.841678    8508 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:21:17 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:21:17 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
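The kubelet section above is the root cause of the whole failure: systemd restarts kubelet roughly once per second (counter 811 through 814) because this v1.35.0-beta.0 kubelet fails configuration validation on a cgroup v1 host, so the apiserver static pod never comes up and every API call in this report is refused. A minimal host-side sketch of the usual cgroup-version check, assuming the standard mount point; this mirrors common runtime detection (the unified v2 hierarchy exposes cgroup.controllers), not the kubelet's own validation code:

package main

import (
	"fmt"
	"os"
)

func main() {
	// /sys/fs/cgroup/cgroup.controllers exists only on a unified
	// cgroup v2 hierarchy; its absence indicates cgroup v1.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified hierarchy): kubelet validation should pass")
	} else if os.IsNotExist(err) {
		fmt.Println("cgroup v1 host: matches the kubelet 'command failed' error above")
	} else {
		fmt.Println("could not determine cgroup version:", err)
	}
}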
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012: exit status 2 (331.630663ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-767012" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 kubectl -- --context functional-767012 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 kubectl -- --context functional-767012 get pods: exit status 1 (111.490962ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-767012 kubectl -- --context functional-767012 get pods": exit status 1
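"Connection refused" in the stderr above means the TCP handshake itself was rejected: the node container is running, but nothing listens on 8441 because the kubelet (see the restart loop earlier) never started the apiserver. A minimal sketch of a direct probe that separates this case from a proxy or routing problem:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the apiserver endpoint the test was hitting. "connection
	// refused" means the host is reachable but the port is closed;
	// a timeout would instead suggest a network/proxy issue.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("TCP connect to apiserver succeeded")
}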
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-767012
helpers_test.go:244: (dbg) docker inspect functional-767012:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	        "Created": "2025-12-12T00:06:52.261765556Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42951,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:06:52.317917194Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hostname",
	        "HostsPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hosts",
	        "LogPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e-json.log",
	        "Name": "/functional-767012",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-767012:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-767012",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	                "LowerDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-767012",
	                "Source": "/var/lib/docker/volumes/functional-767012/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-767012",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-767012",
	                "name.minikube.sigs.k8s.io": "functional-767012",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e781257da3adf1d3284ab2a6de0168c3db7957f25a7e53d0015250294193762d",
	            "SandboxKey": "/var/run/docker/netns/e781257da3ad",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-767012": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:4d:78:ba:7d:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "83467cc4cb13818b98ec0d7cb5fc0064ea6eb2c8db4256a8a81330921aa2d9a4",
	                    "EndpointID": "b787b732d8d748776ceeb6e65fab51cc1e79758446bc85ac20043b35593fab12",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-767012",
	                        "6585a82fe5e6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012: exit status 2 (320.620328ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-095481 image ls --format yaml --alsologtostderr                                                                                              │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image   │ functional-095481 image ls --format short --alsologtostderr                                                                                             │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image   │ functional-095481 image ls --format table --alsologtostderr                                                                                             │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image   │ functional-095481 image ls --format json --alsologtostderr                                                                                              │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ ssh     │ functional-095481 ssh pgrep buildkitd                                                                                                                   │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │                     │
	│ image   │ functional-095481 image build -t localhost/my-image:functional-095481 testdata/build --alsologtostderr                                                  │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image   │ functional-095481 image ls                                                                                                                              │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ delete  │ -p functional-095481                                                                                                                                    │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ start   │ -p functional-767012 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │                     │
	│ start   │ -p functional-767012 --alsologtostderr -v=8                                                                                                             │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:15 UTC │                     │
	│ cache   │ functional-767012 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ functional-767012 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ functional-767012 cache add registry.k8s.io/pause:latest                                                                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ functional-767012 cache add minikube-local-cache-test:functional-767012                                                                                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ functional-767012 cache delete minikube-local-cache-test:functional-767012                                                                              │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-767012 ssh sudo crictl images                                                                                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-767012 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-767012 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ cache   │ functional-767012 cache reload                                                                                                                          │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-767012 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ kubectl │ functional-767012 kubectl -- --context functional-767012 get pods                                                                                       │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:15:08
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:15:08.188216   48339 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:15:08.188435   48339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:15:08.188463   48339 out.go:374] Setting ErrFile to fd 2...
	I1212 00:15:08.188485   48339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:15:08.188893   48339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:15:08.189436   48339 out.go:368] Setting JSON to false
	I1212 00:15:08.190327   48339 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3455,"bootTime":1765495054,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 00:15:08.190468   48339 start.go:143] virtualization:  
	I1212 00:15:08.194075   48339 out.go:179] * [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:15:08.197745   48339 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:15:08.197889   48339 notify.go:221] Checking for updates...
	I1212 00:15:08.203623   48339 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:15:08.206559   48339 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:08.209313   48339 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 00:15:08.212202   48339 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:15:08.215231   48339 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:15:08.218454   48339 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:15:08.218601   48339 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:15:08.244528   48339 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:15:08.244655   48339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:15:08.299617   48339 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:15:08.290252755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:15:08.299730   48339 docker.go:319] overlay module found
	I1212 00:15:08.302863   48339 out.go:179] * Using the docker driver based on existing profile
	I1212 00:15:08.305730   48339 start.go:309] selected driver: docker
	I1212 00:15:08.305754   48339 start.go:927] validating driver "docker" against &{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:15:08.305854   48339 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:15:08.305953   48339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:15:08.359436   48339 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:15:08.349975764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:15:08.359860   48339 cni.go:84] Creating CNI manager for ""
	I1212 00:15:08.359920   48339 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:15:08.359966   48339 start.go:353] cluster config:
	{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:15:08.363136   48339 out.go:179] * Starting "functional-767012" primary control-plane node in "functional-767012" cluster
	I1212 00:15:08.365917   48339 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 00:15:08.368829   48339 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:15:08.371809   48339 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:15:08.371858   48339 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 00:15:08.371872   48339 cache.go:65] Caching tarball of preloaded images
	I1212 00:15:08.371970   48339 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 00:15:08.371992   48339 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 00:15:08.372099   48339 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/config.json ...
	I1212 00:15:08.372328   48339 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:15:08.391509   48339 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:15:08.391533   48339 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:15:08.391552   48339 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:15:08.391583   48339 start.go:360] acquireMachinesLock for functional-767012: {Name:mk41cf89e93a3830367886ebbef2bb8f6e99e3f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:15:08.391643   48339 start.go:364] duration metric: took 36.464µs to acquireMachinesLock for "functional-767012"
	I1212 00:15:08.391666   48339 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:15:08.391675   48339 fix.go:54] fixHost starting: 
	I1212 00:15:08.391939   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:08.408717   48339 fix.go:112] recreateIfNeeded on functional-767012: state=Running err=<nil>
	W1212 00:15:08.408748   48339 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:15:08.411849   48339 out.go:252] * Updating the running docker "functional-767012" container ...
	I1212 00:15:08.411881   48339 machine.go:94] provisionDockerMachine start ...
	I1212 00:15:08.411961   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:08.429482   48339 main.go:143] libmachine: Using SSH client type: native
	I1212 00:15:08.429817   48339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:15:08.429834   48339 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:15:08.578648   48339 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
	I1212 00:15:08.578671   48339 ubuntu.go:182] provisioning hostname "functional-767012"
	I1212 00:15:08.578741   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:08.596871   48339 main.go:143] libmachine: Using SSH client type: native
	I1212 00:15:08.597187   48339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:15:08.597227   48339 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-767012 && echo "functional-767012" | sudo tee /etc/hostname
	I1212 00:15:08.759668   48339 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
	I1212 00:15:08.759746   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:08.776780   48339 main.go:143] libmachine: Using SSH client type: native
	I1212 00:15:08.777096   48339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:15:08.777119   48339 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-767012' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-767012/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-767012' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:15:08.931523   48339 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:15:08.931550   48339 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 00:15:08.931582   48339 ubuntu.go:190] setting up certificates
	I1212 00:15:08.931592   48339 provision.go:84] configureAuth start
	I1212 00:15:08.931653   48339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:15:08.952406   48339 provision.go:143] copyHostCerts
	I1212 00:15:08.952454   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 00:15:08.952497   48339 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 00:15:08.952507   48339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 00:15:08.952585   48339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 00:15:08.952685   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 00:15:08.952707   48339 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 00:15:08.952712   48339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 00:15:08.952745   48339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 00:15:08.952800   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 00:15:08.952821   48339 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 00:15:08.952828   48339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 00:15:08.952852   48339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 00:15:08.952913   48339 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.functional-767012 san=[127.0.0.1 192.168.49.2 functional-767012 localhost minikube]
	I1212 00:15:09.089842   48339 provision.go:177] copyRemoteCerts
	I1212 00:15:09.089908   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:15:09.089956   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.108065   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.210645   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:15:09.210700   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:15:09.228116   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:15:09.228176   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:15:09.245824   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:15:09.245889   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:15:09.263086   48339 provision.go:87] duration metric: took 331.470752ms to configureAuth
	I1212 00:15:09.263116   48339 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:15:09.263293   48339 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:15:09.263306   48339 machine.go:97] duration metric: took 851.418761ms to provisionDockerMachine
	I1212 00:15:09.263315   48339 start.go:293] postStartSetup for "functional-767012" (driver="docker")
	I1212 00:15:09.263326   48339 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:15:09.263390   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:15:09.263439   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.281753   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.386868   48339 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:15:09.390421   48339 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 00:15:09.390442   48339 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 00:15:09.390447   48339 command_runner.go:130] > VERSION_ID="12"
	I1212 00:15:09.390451   48339 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 00:15:09.390456   48339 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 00:15:09.390460   48339 command_runner.go:130] > ID=debian
	I1212 00:15:09.390464   48339 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 00:15:09.390469   48339 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 00:15:09.390475   48339 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 00:15:09.390546   48339 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:15:09.390568   48339 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:15:09.390580   48339 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 00:15:09.390640   48339 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 00:15:09.390732   48339 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 00:15:09.390742   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> /etc/ssl/certs/42902.pem
	I1212 00:15:09.390816   48339 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts -> hosts in /etc/test/nested/copy/4290
	I1212 00:15:09.390824   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts -> /etc/test/nested/copy/4290/hosts
	I1212 00:15:09.390867   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4290
	I1212 00:15:09.398526   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:15:09.416059   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts --> /etc/test/nested/copy/4290/hosts (40 bytes)
	I1212 00:15:09.433237   48339 start.go:296] duration metric: took 169.908089ms for postStartSetup
	I1212 00:15:09.433321   48339 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:15:09.433384   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.450800   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.556105   48339 command_runner.go:130] > 14%
	I1212 00:15:09.557034   48339 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:15:09.562380   48339 command_runner.go:130] > 169G
	I1212 00:15:09.562946   48339 fix.go:56] duration metric: took 1.171267005s for fixHost
	I1212 00:15:09.562967   48339 start.go:83] releasing machines lock for "functional-767012", held for 1.171312429s
	I1212 00:15:09.563050   48339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:15:09.582602   48339 ssh_runner.go:195] Run: cat /version.json
	I1212 00:15:09.582654   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.582889   48339 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:15:09.582947   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.601106   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.627042   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.706722   48339 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1212 00:15:09.706847   48339 ssh_runner.go:195] Run: systemctl --version
	I1212 00:15:09.800321   48339 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 00:15:09.800390   48339 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 00:15:09.800423   48339 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 00:15:09.800514   48339 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:15:09.804624   48339 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 00:15:09.804945   48339 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:15:09.805036   48339 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:15:09.812955   48339 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:15:09.813030   48339 start.go:496] detecting cgroup driver to use...
	I1212 00:15:09.813095   48339 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 00:15:09.813242   48339 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:15:09.829352   48339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:15:09.842558   48339 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:15:09.842620   48339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:15:09.858553   48339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:15:09.872251   48339 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:15:10.008398   48339 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:15:10.140361   48339 docker.go:234] disabling docker service ...
	I1212 00:15:10.140425   48339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:15:10.156860   48339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:15:10.170461   48339 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:15:10.304156   48339 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:15:10.452566   48339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:15:10.465745   48339 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:15:10.479553   48339 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 00:15:10.480868   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 00:15:10.489677   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:15:10.498827   48339 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:15:10.498939   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:15:10.508103   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:15:10.516726   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:15:10.525281   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:15:10.533906   48339 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:15:10.541697   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 00:15:10.550595   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 00:15:10.559645   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 00:15:10.568588   48339 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:15:10.575412   48339 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 00:15:10.576366   48339 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:15:10.583788   48339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:15:10.698857   48339 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 00:15:10.837222   48339 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 00:15:10.837316   48339 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 00:15:10.841505   48339 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1212 00:15:10.841543   48339 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 00:15:10.841551   48339 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1212 00:15:10.841558   48339 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:15:10.841564   48339 command_runner.go:130] > Access: 2025-12-12 00:15:10.793315522 +0000
	I1212 00:15:10.841569   48339 command_runner.go:130] > Modify: 2025-12-12 00:15:10.793315522 +0000
	I1212 00:15:10.841575   48339 command_runner.go:130] > Change: 2025-12-12 00:15:10.793315522 +0000
	I1212 00:15:10.841583   48339 command_runner.go:130] >  Birth: -
	I1212 00:15:10.841612   48339 start.go:564] Will wait 60s for crictl version
	I1212 00:15:10.841667   48339 ssh_runner.go:195] Run: which crictl
	I1212 00:15:10.845418   48339 command_runner.go:130] > /usr/local/bin/crictl
	I1212 00:15:10.845528   48339 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:15:10.867684   48339 command_runner.go:130] > Version:  0.1.0
	I1212 00:15:10.867710   48339 command_runner.go:130] > RuntimeName:  containerd
	I1212 00:15:10.867718   48339 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1212 00:15:10.867725   48339 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 00:15:10.869691   48339 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 00:15:10.869761   48339 ssh_runner.go:195] Run: containerd --version
	I1212 00:15:10.889630   48339 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1212 00:15:10.891644   48339 ssh_runner.go:195] Run: containerd --version
	I1212 00:15:10.909520   48339 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1212 00:15:10.917318   48339 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 00:15:10.920211   48339 cli_runner.go:164] Run: docker network inspect functional-767012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:15:10.936971   48339 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:15:10.940949   48339 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1212 00:15:10.941183   48339 kubeadm.go:884] updating cluster {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:15:10.941314   48339 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:15:10.941401   48339 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:15:10.964902   48339 command_runner.go:130] > {
	I1212 00:15:10.964923   48339 command_runner.go:130] >   "images":  [
	I1212 00:15:10.964934   48339 command_runner.go:130] >     {
	I1212 00:15:10.964944   48339 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:15:10.964949   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.964954   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:15:10.964957   48339 command_runner.go:130] >       ],
	I1212 00:15:10.964962   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.964974   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 00:15:10.964977   48339 command_runner.go:130] >       ],
	I1212 00:15:10.964982   48339 command_runner.go:130] >       "size":  "40636774",
	I1212 00:15:10.964989   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.964994   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965005   48339 command_runner.go:130] >     },
	I1212 00:15:10.965009   48339 command_runner.go:130] >     {
	I1212 00:15:10.965017   48339 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:15:10.965023   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965029   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:15:10.965032   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965036   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965047   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:15:10.965050   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965054   48339 command_runner.go:130] >       "size":  "8034419",
	I1212 00:15:10.965058   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965062   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965068   48339 command_runner.go:130] >     },
	I1212 00:15:10.965071   48339 command_runner.go:130] >     {
	I1212 00:15:10.965079   48339 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:15:10.965085   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965092   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:15:10.965095   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965101   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965112   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 00:15:10.965115   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965121   48339 command_runner.go:130] >       "size":  "21168808",
	I1212 00:15:10.965129   48339 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:15:10.965134   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965137   48339 command_runner.go:130] >     },
	I1212 00:15:10.965143   48339 command_runner.go:130] >     {
	I1212 00:15:10.965152   48339 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:15:10.965164   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965169   48339 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:15:10.965172   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965176   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965190   48339 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 00:15:10.965193   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965199   48339 command_runner.go:130] >       "size":  "21136588",
	I1212 00:15:10.965203   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965218   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965224   48339 command_runner.go:130] >       },
	I1212 00:15:10.965228   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965231   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965235   48339 command_runner.go:130] >     },
	I1212 00:15:10.965238   48339 command_runner.go:130] >     {
	I1212 00:15:10.965245   48339 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:15:10.965251   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965256   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:15:10.965262   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965266   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965274   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 00:15:10.965278   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965285   48339 command_runner.go:130] >       "size":  "24678359",
	I1212 00:15:10.965288   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965296   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965302   48339 command_runner.go:130] >       },
	I1212 00:15:10.965306   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965311   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965314   48339 command_runner.go:130] >     },
	I1212 00:15:10.965323   48339 command_runner.go:130] >     {
	I1212 00:15:10.965332   48339 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:15:10.965345   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965350   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:15:10.965354   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965358   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965373   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 00:15:10.965377   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965381   48339 command_runner.go:130] >       "size":  "20661043",
	I1212 00:15:10.965385   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965392   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965395   48339 command_runner.go:130] >       },
	I1212 00:15:10.965399   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965403   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965406   48339 command_runner.go:130] >     },
	I1212 00:15:10.965412   48339 command_runner.go:130] >     {
	I1212 00:15:10.965420   48339 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:15:10.965426   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965431   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:15:10.965434   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965438   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965446   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:15:10.965453   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965457   48339 command_runner.go:130] >       "size":  "22429671",
	I1212 00:15:10.965461   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965465   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965469   48339 command_runner.go:130] >     },
	I1212 00:15:10.965475   48339 command_runner.go:130] >     {
	I1212 00:15:10.965482   48339 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:15:10.965486   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965492   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:15:10.965497   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965502   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965515   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 00:15:10.965522   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965526   48339 command_runner.go:130] >       "size":  "15391364",
	I1212 00:15:10.965530   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965534   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965539   48339 command_runner.go:130] >       },
	I1212 00:15:10.965543   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965553   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965556   48339 command_runner.go:130] >     },
	I1212 00:15:10.965559   48339 command_runner.go:130] >     {
	I1212 00:15:10.965566   48339 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:15:10.965570   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965574   48339 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:15:10.965578   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965582   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965591   48339 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 00:15:10.965602   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965606   48339 command_runner.go:130] >       "size":  "267939",
	I1212 00:15:10.965610   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965614   48339 command_runner.go:130] >         "value":  "65535"
	I1212 00:15:10.965617   48339 command_runner.go:130] >       },
	I1212 00:15:10.965628   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965632   48339 command_runner.go:130] >       "pinned":  true
	I1212 00:15:10.965635   48339 command_runner.go:130] >     }
	I1212 00:15:10.965638   48339 command_runner.go:130] >   ]
	I1212 00:15:10.965640   48339 command_runner.go:130] > }
	I1212 00:15:10.968555   48339 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:15:10.968581   48339 containerd.go:534] Images already preloaded, skipping extraction
	I1212 00:15:10.968640   48339 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:15:10.995305   48339 command_runner.go:130] > {
	I1212 00:15:10.995329   48339 command_runner.go:130] >   "images":  [
	I1212 00:15:10.995334   48339 command_runner.go:130] >     {
	I1212 00:15:10.995344   48339 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:15:10.995349   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995355   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:15:10.995359   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995375   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995392   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 00:15:10.995395   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995400   48339 command_runner.go:130] >       "size":  "40636774",
	I1212 00:15:10.995404   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995408   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995414   48339 command_runner.go:130] >     },
	I1212 00:15:10.995418   48339 command_runner.go:130] >     {
	I1212 00:15:10.995429   48339 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:15:10.995438   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995444   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:15:10.995448   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995452   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995466   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:15:10.995470   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995475   48339 command_runner.go:130] >       "size":  "8034419",
	I1212 00:15:10.995483   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995487   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995490   48339 command_runner.go:130] >     },
	I1212 00:15:10.995493   48339 command_runner.go:130] >     {
	I1212 00:15:10.995500   48339 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:15:10.995506   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995512   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:15:10.995515   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995524   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995536   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 00:15:10.995540   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995544   48339 command_runner.go:130] >       "size":  "21168808",
	I1212 00:15:10.995554   48339 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:15:10.995558   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995561   48339 command_runner.go:130] >     },
	I1212 00:15:10.995564   48339 command_runner.go:130] >     {
	I1212 00:15:10.995572   48339 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:15:10.995583   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995588   48339 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:15:10.995592   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995596   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995603   48339 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 00:15:10.995611   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995615   48339 command_runner.go:130] >       "size":  "21136588",
	I1212 00:15:10.995619   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995623   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995631   48339 command_runner.go:130] >       },
	I1212 00:15:10.995635   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995639   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995642   48339 command_runner.go:130] >     },
	I1212 00:15:10.995646   48339 command_runner.go:130] >     {
	I1212 00:15:10.995659   48339 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:15:10.995663   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995678   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:15:10.995687   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995692   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995701   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 00:15:10.995709   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995713   48339 command_runner.go:130] >       "size":  "24678359",
	I1212 00:15:10.995716   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995727   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995734   48339 command_runner.go:130] >       },
	I1212 00:15:10.995738   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995743   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995746   48339 command_runner.go:130] >     },
	I1212 00:15:10.995749   48339 command_runner.go:130] >     {
	I1212 00:15:10.995756   48339 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:15:10.995762   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995768   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:15:10.995771   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995782   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995795   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 00:15:10.995798   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995802   48339 command_runner.go:130] >       "size":  "20661043",
	I1212 00:15:10.995811   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995815   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995820   48339 command_runner.go:130] >       },
	I1212 00:15:10.995830   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995834   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995838   48339 command_runner.go:130] >     },
	I1212 00:15:10.995841   48339 command_runner.go:130] >     {
	I1212 00:15:10.995847   48339 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:15:10.995854   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995859   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:15:10.995863   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995867   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995877   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:15:10.995884   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995888   48339 command_runner.go:130] >       "size":  "22429671",
	I1212 00:15:10.995893   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995902   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995906   48339 command_runner.go:130] >     },
	I1212 00:15:10.995909   48339 command_runner.go:130] >     {
	I1212 00:15:10.995916   48339 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:15:10.995924   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995929   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:15:10.995933   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995937   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995948   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 00:15:10.995952   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995956   48339 command_runner.go:130] >       "size":  "15391364",
	I1212 00:15:10.995963   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995967   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995983   48339 command_runner.go:130] >       },
	I1212 00:15:10.995993   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995997   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.996001   48339 command_runner.go:130] >     },
	I1212 00:15:10.996004   48339 command_runner.go:130] >     {
	I1212 00:15:10.996011   48339 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:15:10.996020   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.996025   48339 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:15:10.996029   48339 command_runner.go:130] >       ],
	I1212 00:15:10.996033   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.996046   48339 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 00:15:10.996053   48339 command_runner.go:130] >       ],
	I1212 00:15:10.996057   48339 command_runner.go:130] >       "size":  "267939",
	I1212 00:15:10.996061   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.996065   48339 command_runner.go:130] >         "value":  "65535"
	I1212 00:15:10.996074   48339 command_runner.go:130] >       },
	I1212 00:15:10.996078   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.996086   48339 command_runner.go:130] >       "pinned":  true
	I1212 00:15:10.996089   48339 command_runner.go:130] >     }
	I1212 00:15:10.996095   48339 command_runner.go:130] >   ]
	I1212 00:15:10.996103   48339 command_runner.go:130] > }
	I1212 00:15:10.997943   48339 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:15:10.997972   48339 cache_images.go:86] Images are preloaded, skipping loading
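Both `sudo crictl images --output json` runs above return the same nine images, which is why preload.go and cache_images.go each conclude there is nothing to extract or load. A sketch of that comparison under the JSON shape shown in the log; the expected-tag set is abbreviated to three of the nine images and the type names are illustrative:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the fields of interest in `crictl images --output json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
		Pinned   bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	// Abbreviated: the log shows nine preloaded images in total.
	want := map[string]bool{
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0": true,
		"registry.k8s.io/etcd:3.6.5-0":                  true,
		"registry.k8s.io/pause:3.10.1":                  true,
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			delete(want, tag) // seen, so no longer missing
		}
	}
	if len(want) == 0 {
		fmt.Println("all expected images are preloaded")
	} else {
		fmt.Println("missing images:", want)
	}
}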
	I1212 00:15:10.997981   48339 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 00:15:10.998119   48339 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-767012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:15:10.998212   48339 ssh_runner.go:195] Run: sudo crictl info
	I1212 00:15:11.021367   48339 command_runner.go:130] > {
	I1212 00:15:11.021387   48339 command_runner.go:130] >   "cniconfig": {
	I1212 00:15:11.021393   48339 command_runner.go:130] >     "Networks": [
	I1212 00:15:11.021397   48339 command_runner.go:130] >       {
	I1212 00:15:11.021403   48339 command_runner.go:130] >         "Config": {
	I1212 00:15:11.021408   48339 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1212 00:15:11.021413   48339 command_runner.go:130] >           "Name": "cni-loopback",
	I1212 00:15:11.021418   48339 command_runner.go:130] >           "Plugins": [
	I1212 00:15:11.021422   48339 command_runner.go:130] >             {
	I1212 00:15:11.021426   48339 command_runner.go:130] >               "Network": {
	I1212 00:15:11.021430   48339 command_runner.go:130] >                 "ipam": {},
	I1212 00:15:11.021438   48339 command_runner.go:130] >                 "type": "loopback"
	I1212 00:15:11.021445   48339 command_runner.go:130] >               },
	I1212 00:15:11.021450   48339 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1212 00:15:11.021457   48339 command_runner.go:130] >             }
	I1212 00:15:11.021461   48339 command_runner.go:130] >           ],
	I1212 00:15:11.021470   48339 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1212 00:15:11.021474   48339 command_runner.go:130] >         },
	I1212 00:15:11.021485   48339 command_runner.go:130] >         "IFName": "lo"
	I1212 00:15:11.021489   48339 command_runner.go:130] >       }
	I1212 00:15:11.021493   48339 command_runner.go:130] >     ],
	I1212 00:15:11.021498   48339 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1212 00:15:11.021504   48339 command_runner.go:130] >     "PluginDirs": [
	I1212 00:15:11.021509   48339 command_runner.go:130] >       "/opt/cni/bin"
	I1212 00:15:11.021514   48339 command_runner.go:130] >     ],
	I1212 00:15:11.021525   48339 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1212 00:15:11.021533   48339 command_runner.go:130] >     "Prefix": "eth"
	I1212 00:15:11.021537   48339 command_runner.go:130] >   },
	I1212 00:15:11.021540   48339 command_runner.go:130] >   "config": {
	I1212 00:15:11.021546   48339 command_runner.go:130] >     "cdiSpecDirs": [
	I1212 00:15:11.021552   48339 command_runner.go:130] >       "/etc/cdi",
	I1212 00:15:11.021558   48339 command_runner.go:130] >       "/var/run/cdi"
	I1212 00:15:11.021560   48339 command_runner.go:130] >     ],
	I1212 00:15:11.021563   48339 command_runner.go:130] >     "cni": {
	I1212 00:15:11.021567   48339 command_runner.go:130] >       "binDir": "",
	I1212 00:15:11.021571   48339 command_runner.go:130] >       "binDirs": [
	I1212 00:15:11.021574   48339 command_runner.go:130] >         "/opt/cni/bin"
	I1212 00:15:11.021577   48339 command_runner.go:130] >       ],
	I1212 00:15:11.021582   48339 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1212 00:15:11.021585   48339 command_runner.go:130] >       "confTemplate": "",
	I1212 00:15:11.021589   48339 command_runner.go:130] >       "ipPref": "",
	I1212 00:15:11.021592   48339 command_runner.go:130] >       "maxConfNum": 1,
	I1212 00:15:11.021597   48339 command_runner.go:130] >       "setupSerially": false,
	I1212 00:15:11.021601   48339 command_runner.go:130] >       "useInternalLoopback": false
	I1212 00:15:11.021604   48339 command_runner.go:130] >     },
	I1212 00:15:11.021610   48339 command_runner.go:130] >     "containerd": {
	I1212 00:15:11.021614   48339 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1212 00:15:11.021619   48339 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1212 00:15:11.021624   48339 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1212 00:15:11.021627   48339 command_runner.go:130] >       "runtimes": {
	I1212 00:15:11.021630   48339 command_runner.go:130] >         "runc": {
	I1212 00:15:11.021635   48339 command_runner.go:130] >           "ContainerAnnotations": null,
	I1212 00:15:11.021639   48339 command_runner.go:130] >           "PodAnnotations": null,
	I1212 00:15:11.021644   48339 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1212 00:15:11.021648   48339 command_runner.go:130] >           "cgroupWritable": false,
	I1212 00:15:11.021652   48339 command_runner.go:130] >           "cniConfDir": "",
	I1212 00:15:11.021656   48339 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1212 00:15:11.021664   48339 command_runner.go:130] >           "io_type": "",
	I1212 00:15:11.021670   48339 command_runner.go:130] >           "options": {
	I1212 00:15:11.021675   48339 command_runner.go:130] >             "BinaryName": "",
	I1212 00:15:11.021683   48339 command_runner.go:130] >             "CriuImagePath": "",
	I1212 00:15:11.021695   48339 command_runner.go:130] >             "CriuWorkPath": "",
	I1212 00:15:11.021703   48339 command_runner.go:130] >             "IoGid": 0,
	I1212 00:15:11.021708   48339 command_runner.go:130] >             "IoUid": 0,
	I1212 00:15:11.021712   48339 command_runner.go:130] >             "NoNewKeyring": false,
	I1212 00:15:11.021716   48339 command_runner.go:130] >             "Root": "",
	I1212 00:15:11.021723   48339 command_runner.go:130] >             "ShimCgroup": "",
	I1212 00:15:11.021728   48339 command_runner.go:130] >             "SystemdCgroup": false
	I1212 00:15:11.021734   48339 command_runner.go:130] >           },
	I1212 00:15:11.021739   48339 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1212 00:15:11.021745   48339 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1212 00:15:11.021749   48339 command_runner.go:130] >           "runtimePath": "",
	I1212 00:15:11.021755   48339 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1212 00:15:11.021761   48339 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1212 00:15:11.021765   48339 command_runner.go:130] >           "snapshotter": ""
	I1212 00:15:11.021770   48339 command_runner.go:130] >         }
	I1212 00:15:11.021774   48339 command_runner.go:130] >       }
	I1212 00:15:11.021778   48339 command_runner.go:130] >     },
	I1212 00:15:11.021790   48339 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1212 00:15:11.021799   48339 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1212 00:15:11.021805   48339 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1212 00:15:11.021810   48339 command_runner.go:130] >     "disableApparmor": false,
	I1212 00:15:11.021816   48339 command_runner.go:130] >     "disableHugetlbController": true,
	I1212 00:15:11.021821   48339 command_runner.go:130] >     "disableProcMount": false,
	I1212 00:15:11.021825   48339 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1212 00:15:11.021828   48339 command_runner.go:130] >     "enableCDI": true,
	I1212 00:15:11.021832   48339 command_runner.go:130] >     "enableSelinux": false,
	I1212 00:15:11.021840   48339 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1212 00:15:11.021845   48339 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1212 00:15:11.021852   48339 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1212 00:15:11.021858   48339 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1212 00:15:11.021868   48339 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1212 00:15:11.021873   48339 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1212 00:15:11.021877   48339 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1212 00:15:11.021886   48339 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1212 00:15:11.021890   48339 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1212 00:15:11.021896   48339 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1212 00:15:11.021901   48339 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1212 00:15:11.021907   48339 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1212 00:15:11.021910   48339 command_runner.go:130] >   },
	I1212 00:15:11.021914   48339 command_runner.go:130] >   "features": {
	I1212 00:15:11.021919   48339 command_runner.go:130] >     "supplemental_groups_policy": true
	I1212 00:15:11.021922   48339 command_runner.go:130] >   },
	I1212 00:15:11.021926   48339 command_runner.go:130] >   "golang": "go1.24.9",
	I1212 00:15:11.021938   48339 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 00:15:11.021951   48339 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 00:15:11.021954   48339 command_runner.go:130] >   "runtimeHandlers": [
	I1212 00:15:11.021957   48339 command_runner.go:130] >     {
	I1212 00:15:11.021961   48339 command_runner.go:130] >       "features": {
	I1212 00:15:11.021973   48339 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 00:15:11.021977   48339 command_runner.go:130] >         "user_namespaces": true
	I1212 00:15:11.021984   48339 command_runner.go:130] >       }
	I1212 00:15:11.021991   48339 command_runner.go:130] >     },
	I1212 00:15:11.021996   48339 command_runner.go:130] >     {
	I1212 00:15:11.022000   48339 command_runner.go:130] >       "features": {
	I1212 00:15:11.022006   48339 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 00:15:11.022013   48339 command_runner.go:130] >         "user_namespaces": true
	I1212 00:15:11.022016   48339 command_runner.go:130] >       },
	I1212 00:15:11.022021   48339 command_runner.go:130] >       "name": "runc"
	I1212 00:15:11.022026   48339 command_runner.go:130] >     }
	I1212 00:15:11.022029   48339 command_runner.go:130] >   ],
	I1212 00:15:11.022033   48339 command_runner.go:130] >   "status": {
	I1212 00:15:11.022045   48339 command_runner.go:130] >     "conditions": [
	I1212 00:15:11.022048   48339 command_runner.go:130] >       {
	I1212 00:15:11.022055   48339 command_runner.go:130] >         "message": "",
	I1212 00:15:11.022059   48339 command_runner.go:130] >         "reason": "",
	I1212 00:15:11.022065   48339 command_runner.go:130] >         "status": true,
	I1212 00:15:11.022070   48339 command_runner.go:130] >         "type": "RuntimeReady"
	I1212 00:15:11.022073   48339 command_runner.go:130] >       },
	I1212 00:15:11.022076   48339 command_runner.go:130] >       {
	I1212 00:15:11.022083   48339 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1212 00:15:11.022087   48339 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1212 00:15:11.022094   48339 command_runner.go:130] >         "status": false,
	I1212 00:15:11.022099   48339 command_runner.go:130] >         "type": "NetworkReady"
	I1212 00:15:11.022104   48339 command_runner.go:130] >       },
	I1212 00:15:11.022107   48339 command_runner.go:130] >       {
	I1212 00:15:11.022132   48339 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1212 00:15:11.022141   48339 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1212 00:15:11.022149   48339 command_runner.go:130] >         "status": false,
	I1212 00:15:11.022155   48339 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1212 00:15:11.022158   48339 command_runner.go:130] >       }
	I1212 00:15:11.022161   48339 command_runner.go:130] >     ]
	I1212 00:15:11.022164   48339 command_runner.go:130] >   }
	I1212 00:15:11.022166   48339 command_runner.go:130] > }
	I1212 00:15:11.024522   48339 cni.go:84] Creating CNI manager for ""
	I1212 00:15:11.024547   48339 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
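Note the NetworkReady=false / "cni plugin not initialized" condition in the `crictl info` output above; that is consistent with this point in startup, since kindnet is only being recommended here and its config lands in /etc/cni/net.d later. A sketch of reading that condition, assuming the status.conditions shape shown in the log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criInfo picks out just the readiness conditions from `crictl info`.
type criInfo struct {
	Status struct {
		Conditions []struct {
			Type    string `json:"type"`
			Status  bool   `json:"status"`
			Message string `json:"message"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "info").Output()
	if err != nil {
		panic(err)
	}
	var info criInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	for _, c := range info.Status.Conditions {
		if c.Type == "NetworkReady" && !c.Status {
			fmt.Println("CNI not ready yet:", c.Message)
		}
	}
}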
	I1212 00:15:11.024564   48339 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:15:11.024607   48339 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-767012 NodeName:functional-767012 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:15:11.024773   48339 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-767012"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
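The generated file is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), written to /var/tmp/minikube/kubeadm.yaml.new below. A minimal sanity check that the stream parses, assuming gopkg.in/yaml.v3 is available; it decodes generically rather than with kubeadm's own types:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Decode each `---`-separated document until the stream is exhausted.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
	}
}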
	
	I1212 00:15:11.024850   48339 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:15:11.031979   48339 command_runner.go:130] > kubeadm
	I1212 00:15:11.031999   48339 command_runner.go:130] > kubectl
	I1212 00:15:11.032004   48339 command_runner.go:130] > kubelet
	I1212 00:15:11.033031   48339 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:15:11.033131   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:15:11.041032   48339 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 00:15:11.054723   48339 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:15:11.067854   48339 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1212 00:15:11.081373   48339 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:15:11.085014   48339 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 00:15:11.085116   48339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:15:11.226173   48339 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:15:12.035778   48339 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012 for IP: 192.168.49.2
	I1212 00:15:12.035798   48339 certs.go:195] generating shared ca certs ...
	I1212 00:15:12.035830   48339 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:15:12.035967   48339 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 00:15:12.036010   48339 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 00:15:12.036017   48339 certs.go:257] generating profile certs ...
	I1212 00:15:12.036117   48339 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key
	I1212 00:15:12.036165   48339 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key.fcbff5a4
	I1212 00:15:12.036201   48339 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key
	I1212 00:15:12.036209   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:15:12.036224   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:15:12.036235   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:15:12.036248   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:15:12.036258   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:15:12.036270   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:15:12.036281   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:15:12.036294   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:15:12.036341   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 00:15:12.036372   48339 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 00:15:12.036381   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:15:12.036409   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:15:12.036440   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:15:12.036468   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 00:15:12.036516   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:15:12.036546   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem -> /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.036558   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.036578   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.037134   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:15:12.059224   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:15:12.079145   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:15:12.096868   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:15:12.114531   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:15:12.132828   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:15:12.150161   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:15:12.168014   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:15:12.185251   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 00:15:12.202557   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 00:15:12.219625   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:15:12.237574   48339 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:15:12.250472   48339 ssh_runner.go:195] Run: openssl version
	I1212 00:15:12.256541   48339 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 00:15:12.256947   48339 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.264387   48339 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 00:15:12.271688   48339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.275404   48339 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.275432   48339 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.275482   48339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.315860   48339 command_runner.go:130] > 51391683
	I1212 00:15:12.316400   48339 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:15:12.323656   48339 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.330945   48339 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 00:15:12.339131   48339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.343064   48339 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.343159   48339 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.343241   48339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.383845   48339 command_runner.go:130] > 3ec20f2e
	I1212 00:15:12.384302   48339 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:15:12.391740   48339 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.398710   48339 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:15:12.406076   48339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.409726   48339 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.409770   48339 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.409826   48339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.450507   48339 command_runner.go:130] > b5213941
	I1212 00:15:12.450926   48339 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
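Each of the three CA bundles above (4290.pem, 42902.pem, minikubeCA.pem) is installed the same way: hash the certificate with `openssl x509 -hash -noout`, then symlink /etc/ssl/certs/<hash>.0 back to the PEM, which is what the ln -fs and test -L pairs verify. A sketch of one round trip; installCA is an illustrative helper and needs root to write /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mimic ln -fs: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}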
	I1212 00:15:12.458188   48339 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:15:12.461873   48339 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:15:12.461949   48339 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 00:15:12.461961   48339 command_runner.go:130] > Device: 259,1	Inode: 1311423     Links: 1
	I1212 00:15:12.461969   48339 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:15:12.461975   48339 command_runner.go:130] > Access: 2025-12-12 00:11:05.099200071 +0000
	I1212 00:15:12.461979   48339 command_runner.go:130] > Modify: 2025-12-12 00:07:00.969098600 +0000
	I1212 00:15:12.461984   48339 command_runner.go:130] > Change: 2025-12-12 00:07:00.969098600 +0000
	I1212 00:15:12.461989   48339 command_runner.go:130] >  Birth: 2025-12-12 00:07:00.969098600 +0000
	I1212 00:15:12.462077   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:15:12.504549   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.505002   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:15:12.545847   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.545927   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:15:12.586405   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.586767   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:15:12.629151   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.629637   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:15:12.671966   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.672529   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 00:15:12.713858   48339 command_runner.go:130] > Certificate will not expire
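`openssl x509 -checkend 86400` succeeds only if the certificate is still valid 24 hours from now, so the run of "Certificate will not expire" lines means no control-plane cert is within a day of expiry. The equivalent check in Go's standard library, assuming a PEM-encoded certificate on disk; expiresWithin is an illustrative name:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the cert at path expires within d,
// matching the semantics of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}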
	I1212 00:15:12.714272   48339 kubeadm.go:401] StartCluster: {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:15:12.714367   48339 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 00:15:12.714442   48339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:15:12.749902   48339 cri.go:89] found id: ""
	I1212 00:15:12.750000   48339 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:15:12.759407   48339 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 00:15:12.759429   48339 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 00:15:12.759437   48339 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 00:15:12.760379   48339 kubeadm.go:417] found existing configuration files, will attempt cluster restart
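The restart decision above hinges on the `sudo ls` probe: with /var/lib/kubelet/config.yaml, /var/lib/kubelet/kubeadm-flags.env, and a /var/lib/minikube/etcd directory all present, minikube restarts the existing control plane instead of running a fresh `kubeadm init`. A stat-based sketch of the same probe, run locally on the node rather than over SSH:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Paths taken from the log's `sudo ls` invocation.
	paths := []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	}
	existing := true
	for _, p := range paths {
		if _, err := os.Stat(p); err != nil {
			existing = false
			fmt.Println("missing:", p)
		}
	}
	if existing {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("no prior cluster state, fresh init required")
	}
}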
	I1212 00:15:12.760398   48339 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:15:12.760457   48339 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:15:12.768161   48339 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:15:12.768602   48339 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-767012" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.768706   48339 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-2343/kubeconfig needs updating (will repair): [kubeconfig missing "functional-767012" cluster setting kubeconfig missing "functional-767012" context setting]
	I1212 00:15:12.769002   48339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
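Note: the repair above adds the missing cluster and context entries for the profile and rewrites the kubeconfig under a file lock. A minimal sketch of the same repair using client-go's clientcmd (path, server, and profile name from the log; the locking shown in lock.go is omitted for brevity):

    package main

    import (
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
    	path := "/home/jenkins/minikube-integration/22101-2343/kubeconfig"

    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		cfg = api.NewConfig() // unreadable or missing file: start from scratch
    	}

    	cluster := api.NewCluster()
    	cluster.Server = "https://192.168.49.2:8441"
    	cluster.CertificateAuthority = "/home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt"
    	cfg.Clusters["functional-767012"] = cluster

    	ctx := api.NewContext()
    	ctx.Cluster = "functional-767012"
    	cfg.Contexts["functional-767012"] = ctx
    	cfg.CurrentContext = "functional-767012"

    	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
    		panic(err)
    	}
    }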
	I1212 00:15:12.769434   48339 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.769575   48339 kapi.go:59] client config for functional-767012: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key", CAFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
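Note: the kapi.go dump above is simply a client-go rest.Config doing mutual TLS with the profile's client certificate. Building the equivalent config by hand (host and paths copied from the log, everything else left at its zero value):

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{
    		Host: "https://192.168.49.2:8441",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt",
    			KeyFile:  "/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key",
    			CAFile:   "/home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt",
    		},
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	_ = cs // ready to make the /api/v1/nodes requests seen below
    }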
	I1212 00:15:12.770098   48339 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 00:15:12.770119   48339 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 00:15:12.770125   48339 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 00:15:12.770129   48339 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 00:15:12.770134   48339 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
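Note: the five envvar.go lines record client-go's environment-driven feature gates at their defaults. A toy reader in the same spirit; the KUBE_FEATURE_ prefix is an assumption about client-go's env-var convention, not something this log states:

    package main

    import (
    	"fmt"
    	"os"
    )

    // gateEnabled returns the default unless an override variable is set.
    // Assumption: overrides are spelled KUBE_FEATURE_<GateName>=true|false.
    func gateEnabled(name string, def bool) bool {
    	v, ok := os.LookupEnv("KUBE_FEATURE_" + name)
    	if !ok {
    		return def
    	}
    	return v == "true"
    }

    func main() {
    	fmt.Println("WatchListClient:", gateEnabled("WatchListClient", false))
    	fmt.Println("InOrderInformers:", gateEnabled("InOrderInformers", true))
    }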
	I1212 00:15:12.770402   48339 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:15:12.770508   48339 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 00:15:12.778529   48339 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 00:15:12.778562   48339 kubeadm.go:602] duration metric: took 18.158491ms to restartPrimaryControlPlane
	I1212 00:15:12.778572   48339 kubeadm.go:403] duration metric: took 64.30535ms to StartCluster
	I1212 00:15:12.778619   48339 settings.go:142] acquiring lock: {Name:mk6dd4250df69aeba4752e9f33aeef37272375c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:15:12.778710   48339 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.779343   48339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:15:12.779578   48339 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 00:15:12.779758   48339 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:15:12.779798   48339 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:15:12.779860   48339 addons.go:70] Setting storage-provisioner=true in profile "functional-767012"
	I1212 00:15:12.779873   48339 addons.go:239] Setting addon storage-provisioner=true in "functional-767012"
	I1212 00:15:12.779899   48339 host.go:66] Checking if "functional-767012" exists ...
	I1212 00:15:12.780379   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:12.780789   48339 addons.go:70] Setting default-storageclass=true in profile "functional-767012"
	I1212 00:15:12.780811   48339 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-767012"
	I1212 00:15:12.781090   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:12.784774   48339 out.go:179] * Verifying Kubernetes components...
	I1212 00:15:12.788318   48339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:15:12.822440   48339 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.822619   48339 kapi.go:59] client config for functional-767012: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key", CAFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:15:12.822882   48339 addons.go:239] Setting addon default-storageclass=true in "functional-767012"
	I1212 00:15:12.822910   48339 host.go:66] Checking if "functional-767012" exists ...
	I1212 00:15:12.823362   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:12.828706   48339 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:15:12.831719   48339 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:12.831746   48339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
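Note: "scp memory -->" means the manifest is streamed from an in-memory asset straight to the node, with no temporary file; the sshutil.go lines below then dial port 32788 with the profile's id_rsa. A compilable sketch of that copy step, assuming an already-dialed *ssh.Client from golang.org/x/crypto/ssh (a hypothetical helper, not minikube's actual ssh_runner):

    package sshutil

    import (
    	"bytes"
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    // CopyMemory streams data to dst on the node by piping it into "sudo tee",
    // which is essentially what the "scp memory --> <path> (N bytes)" step does.
    func CopyMemory(client *ssh.Client, data []byte, dst string) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
    }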
	I1212 00:15:12.831810   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:12.856565   48339 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:12.856586   48339 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:15:12.856663   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:12.891591   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:12.907113   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:13.031282   48339 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:15:13.038860   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:13.055219   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:13.785959   48339 node_ready.go:35] waiting up to 6m0s for node "functional-767012" to be "Ready" ...
	I1212 00:15:13.786096   48339 type.go:168] "Request Body" body=""
	I1212 00:15:13.786201   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:13.786332   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:13.786513   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:13.786544   48339 retry.go:31] will retry after 252.334378ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:13.786634   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:13.786678   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:13.786692   48339 retry.go:31] will retry after 187.958053ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
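Note: both applies fail identically because the apiserver behind localhost:8441 is not yet accepting connections, so addons.go re-queues each manifest with a growing, jittered delay (252ms and 187ms here, stretching to several seconds below). A simplified sketch of that apply-with-backoff loop; the real code runs the versioned kubectl binary over SSH, whereas this one assumes a local kubectl on PATH:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os"
    	"os/exec"
    	"time"
    )

    func applyWithRetry(manifest string, attempts int) error {
    	delay := 200 * time.Millisecond
    	for i := 0; i < attempts; i++ {
    		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
    		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			return nil
    		} else {
    			jitter := time.Duration(rand.Int63n(int64(delay)))
    			fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
    			time.Sleep(delay + jitter)
    			delay *= 2 // grow the backoff, as the log's lengthening intervals suggest
    		}
    	}
    	return fmt.Errorf("apply of %s did not succeed after %d attempts", manifest, attempts)
    }

    func main() {
    	_ = applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5)
    }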
	I1212 00:15:13.786725   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:13.975259   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:14.039772   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:14.044477   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.044582   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.044648   48339 retry.go:31] will retry after 322.190642ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.103040   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.103100   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.103119   48339 retry.go:31] will retry after 449.616448ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.286283   48339 type.go:168] "Request Body" body=""
	I1212 00:15:14.286357   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:14.286666   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:14.367911   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:14.423058   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.426726   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.426805   48339 retry.go:31] will retry after 304.882295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.552989   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:14.624219   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.624296   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.624324   48339 retry.go:31] will retry after 431.233251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.732500   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:14.787073   48339 type.go:168] "Request Body" body=""
	I1212 00:15:14.787160   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:14.787408   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:14.793570   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.793617   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.793638   48339 retry.go:31] will retry after 814.242182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.055819   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:15.115988   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:15.119844   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.119920   48339 retry.go:31] will retry after 1.173578041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.287015   48339 type.go:168] "Request Body" body=""
	I1212 00:15:15.287127   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:15.287435   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:15.608995   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:15.668352   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:15.672074   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.672106   48339 retry.go:31] will retry after 987.735436ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.786224   48339 type.go:168] "Request Body" body=""
	I1212 00:15:15.786336   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:15.786676   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:15.786781   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
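Note: from here the wait loop polls GET /api/v1/nodes/functional-767012 roughly every 500ms and periodically logs the connection-refused warning until the apiserver comes back. A minimal equivalent using a client-go clientset (kubeconfig path and node name from the log; the 6-minute budget matches the StartHostTimeout above):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22101-2343/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-767012", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between GETs
    	}
    	fmt.Println("timed out waiting for node Ready")
    }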
	I1212 00:15:16.286218   48339 type.go:168] "Request Body" body=""
	I1212 00:15:16.286309   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:16.286618   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:16.293963   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:16.350242   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:16.354044   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.354074   48339 retry.go:31] will retry after 1.703488512s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.660633   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:16.720806   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:16.720847   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.720866   48339 retry.go:31] will retry after 1.717481089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.787045   48339 type.go:168] "Request Body" body=""
	I1212 00:15:16.787165   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:16.787500   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:17.287197   48339 type.go:168] "Request Body" body=""
	I1212 00:15:17.287287   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:17.287663   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:17.786193   48339 type.go:168] "Request Body" body=""
	I1212 00:15:17.786301   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:17.786622   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:18.058032   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:18.119712   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:18.119758   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.119777   48339 retry.go:31] will retry after 2.564790813s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.286189   48339 type.go:168] "Request Body" body=""
	I1212 00:15:18.286256   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:18.286531   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:18.286571   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:18.438948   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:18.492343   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:18.495818   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.495853   48339 retry.go:31] will retry after 3.474173077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.786235   48339 type.go:168] "Request Body" body=""
	I1212 00:15:18.786319   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:18.786633   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:19.286373   48339 type.go:168] "Request Body" body=""
	I1212 00:15:19.286489   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:19.286915   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:19.786192   48339 type.go:168] "Request Body" body=""
	I1212 00:15:19.786262   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:19.786531   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:20.286266   48339 type.go:168] "Request Body" body=""
	I1212 00:15:20.286338   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:20.286671   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:20.286730   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:20.685395   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:20.744336   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:20.744377   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:20.744397   48339 retry.go:31] will retry after 3.068053389s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:20.786556   48339 type.go:168] "Request Body" body=""
	I1212 00:15:20.786632   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:20.787017   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:21.286794   48339 type.go:168] "Request Body" body=""
	I1212 00:15:21.286863   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:21.287178   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:21.786938   48339 type.go:168] "Request Body" body=""
	I1212 00:15:21.787095   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:21.787425   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:21.970778   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:22.029300   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:22.033382   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:22.033416   48339 retry.go:31] will retry after 3.143683139s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:22.286887   48339 type.go:168] "Request Body" body=""
	I1212 00:15:22.286963   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:22.287298   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:22.287349   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:22.786122   48339 type.go:168] "Request Body" body=""
	I1212 00:15:22.786203   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:22.786515   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:23.286522   48339 type.go:168] "Request Body" body=""
	I1212 00:15:23.286595   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:23.286902   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:23.786669   48339 type.go:168] "Request Body" body=""
	I1212 00:15:23.786750   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:23.787071   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:23.813245   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:23.872447   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:23.872484   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:23.872503   48339 retry.go:31] will retry after 4.295118946s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:24.286878   48339 type.go:168] "Request Body" body=""
	I1212 00:15:24.286966   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:24.287236   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:24.787020   48339 type.go:168] "Request Body" body=""
	I1212 00:15:24.787113   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:24.787396   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:24.787455   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:25.178129   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:25.240141   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:25.243777   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:25.243806   48339 retry.go:31] will retry after 9.168145583s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:25.286119   48339 type.go:168] "Request Body" body=""
	I1212 00:15:25.286212   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:25.286559   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:25.787134   48339 type.go:168] "Request Body" body=""
	I1212 00:15:25.787314   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:25.787683   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:26.286268   48339 type.go:168] "Request Body" body=""
	I1212 00:15:26.286357   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:26.286692   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:26.786194   48339 type.go:168] "Request Body" body=""
	I1212 00:15:26.786286   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:26.786601   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:27.286932   48339 type.go:168] "Request Body" body=""
	I1212 00:15:27.287015   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:27.287267   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:27.287315   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:27.787077   48339 type.go:168] "Request Body" body=""
	I1212 00:15:27.787176   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:27.787513   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:28.168008   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:28.231881   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:28.231917   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:28.231944   48339 retry.go:31] will retry after 6.344313185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:28.286314   48339 type.go:168] "Request Body" body=""
	I1212 00:15:28.286400   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:28.286700   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:28.786192   48339 type.go:168] "Request Body" body=""
	I1212 00:15:28.786267   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:28.786531   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:29.286238   48339 type.go:168] "Request Body" body=""
	I1212 00:15:29.286308   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:29.286623   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:29.786295   48339 type.go:168] "Request Body" body=""
	I1212 00:15:29.786368   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:29.786689   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:29.786753   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:30.287110   48339 type.go:168] "Request Body" body=""
	I1212 00:15:30.287175   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:30.287426   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:30.786872   48339 type.go:168] "Request Body" body=""
	I1212 00:15:30.786960   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:30.787297   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:31.286942   48339 type.go:168] "Request Body" body=""
	I1212 00:15:31.287032   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:31.287368   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:31.786980   48339 type.go:168] "Request Body" body=""
	I1212 00:15:31.787074   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:31.787418   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:31.787478   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:32.286186   48339 type.go:168] "Request Body" body=""
	I1212 00:15:32.286271   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:32.286599   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:32.786430   48339 type.go:168] "Request Body" body=""
	I1212 00:15:32.786534   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:32.786856   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:33.286674   48339 type.go:168] "Request Body" body=""
	I1212 00:15:33.286767   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:33.287049   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:33.786796   48339 type.go:168] "Request Body" body=""
	I1212 00:15:33.786868   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:33.787225   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:34.286903   48339 type.go:168] "Request Body" body=""
	I1212 00:15:34.287005   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:34.287348   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:34.287421   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
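Every GET stanza above is the same readiness probe: minikube fetches the node object and inspects its Ready condition, logging a warning and retrying while the connection is refused. A minimal client-go sketch of that loop (the kubeconfig path and node name are taken from the log; the real code is minikube's node_ready wait):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node every interval until its Ready
// condition is True or the context expires. Errors (e.g. connection
// refused while the apiserver is down) are logged and retried, matching
// the "will retry" warnings in this log.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, interval time.Duration) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} else {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "functional-767012", 500*time.Millisecond); err != nil {
		panic(err)
	}
}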
	I1212 00:15:34.412873   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:34.471886   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:34.475429   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:34.475459   48339 retry.go:31] will retry after 5.427832253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:34.576727   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:34.645023   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:34.645064   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:34.645084   48339 retry.go:31] will retry after 14.315988892s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:34.786162   48339 type.go:168] "Request Body" body=""
	I1212 00:15:34.786245   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:34.786506   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:35.286256   48339 type.go:168] "Request Body" body=""
	I1212 00:15:35.286369   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:35.286766   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:35.786480   48339 type.go:168] "Request Body" body=""
	I1212 00:15:35.786551   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:35.786861   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:36.286546   48339 type.go:168] "Request Body" body=""
	I1212 00:15:36.286613   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:36.286890   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:36.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:15:36.786309   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:36.786640   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:36.786704   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:37.286243   48339 type.go:168] "Request Body" body=""
	I1212 00:15:37.286323   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:37.286640   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:37.786331   48339 type.go:168] "Request Body" body=""
	I1212 00:15:37.786426   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:37.786691   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:38.286739   48339 type.go:168] "Request Body" body=""
	I1212 00:15:38.286834   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:38.287212   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:38.787067   48339 type.go:168] "Request Body" body=""
	I1212 00:15:38.787165   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:38.787505   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:38.787556   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:39.286897   48339 type.go:168] "Request Body" body=""
	I1212 00:15:39.286974   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:39.287246   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:39.787072   48339 type.go:168] "Request Body" body=""
	I1212 00:15:39.787155   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:39.787481   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:39.903977   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:39.961517   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:39.961553   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:39.961584   48339 retry.go:31] will retry after 9.825060256s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:40.286904   48339 type.go:168] "Request Body" body=""
	I1212 00:15:40.287016   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:40.287324   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:40.786920   48339 type.go:168] "Request Body" body=""
	I1212 00:15:40.787007   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:40.787265   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:41.287079   48339 type.go:168] "Request Body" body=""
	I1212 00:15:41.287171   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:41.287483   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:41.287535   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:41.786177   48339 type.go:168] "Request Body" body=""
	I1212 00:15:41.786245   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:41.786586   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:42.286210   48339 type.go:168] "Request Body" body=""
	I1212 00:15:42.286304   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:42.286665   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:42.786373   48339 type.go:168] "Request Body" body=""
	I1212 00:15:42.786449   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:42.786735   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:43.286695   48339 type.go:168] "Request Body" body=""
	I1212 00:15:43.286781   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:43.287063   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:43.786792   48339 type.go:168] "Request Body" body=""
	I1212 00:15:43.786867   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:43.787142   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:43.787197   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:44.286976   48339 type.go:168] "Request Body" body=""
	I1212 00:15:44.287083   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:44.287398   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:44.786120   48339 type.go:168] "Request Body" body=""
	I1212 00:15:44.786194   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:44.786513   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:45.286282   48339 type.go:168] "Request Body" body=""
	I1212 00:15:45.286447   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:45.286824   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:45.786533   48339 type.go:168] "Request Body" body=""
	I1212 00:15:45.786632   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:45.786951   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:46.286792   48339 type.go:168] "Request Body" body=""
	I1212 00:15:46.286884   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:46.287186   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:46.287237   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:46.786874   48339 type.go:168] "Request Body" body=""
	I1212 00:15:46.786956   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:46.787268   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:47.287109   48339 type.go:168] "Request Body" body=""
	I1212 00:15:47.287201   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:47.287499   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:47.786233   48339 type.go:168] "Request Body" body=""
	I1212 00:15:47.786303   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:47.786629   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:48.286436   48339 type.go:168] "Request Body" body=""
	I1212 00:15:48.286503   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:48.286772   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:48.786216   48339 type.go:168] "Request Body" body=""
	I1212 00:15:48.786290   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:48.786671   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:48.786725   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:48.962079   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:49.024775   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:49.024824   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:49.024842   48339 retry.go:31] will retry after 15.053349185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:49.286133   48339 type.go:168] "Request Body" body=""
	I1212 00:15:49.286218   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:49.286771   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:49.786188   48339 type.go:168] "Request Body" body=""
	I1212 00:15:49.786266   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:49.786639   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:49.786790   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:49.873069   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:49.873108   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:49.873126   48339 retry.go:31] will retry after 17.371130847s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:50.286878   48339 type.go:168] "Request Body" body=""
	I1212 00:15:50.286961   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:50.287310   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:50.787122   48339 type.go:168] "Request Body" body=""
	I1212 00:15:50.787202   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:50.787523   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:50.787579   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:51.286912   48339 type.go:168] "Request Body" body=""
	I1212 00:15:51.286981   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:51.287298   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:51.787059   48339 type.go:168] "Request Body" body=""
	I1212 00:15:51.787135   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:51.787456   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:52.286151   48339 type.go:168] "Request Body" body=""
	I1212 00:15:52.286226   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:52.286553   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:52.786336   48339 type.go:168] "Request Body" body=""
	I1212 00:15:52.786407   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:52.786699   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:53.286548   48339 type.go:168] "Request Body" body=""
	I1212 00:15:53.286619   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:53.286939   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:53.287009   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:53.786505   48339 type.go:168] "Request Body" body=""
	I1212 00:15:53.786577   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:53.786912   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:54.286719   48339 type.go:168] "Request Body" body=""
	I1212 00:15:54.286786   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:54.287059   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:54.786836   48339 type.go:168] "Request Body" body=""
	I1212 00:15:54.786933   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:54.787274   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:55.287094   48339 type.go:168] "Request Body" body=""
	I1212 00:15:55.287171   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:55.287511   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:55.287570   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:55.786152   48339 type.go:168] "Request Body" body=""
	I1212 00:15:55.786220   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:55.786474   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:56.286213   48339 type.go:168] "Request Body" body=""
	I1212 00:15:56.286312   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:56.286631   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:56.786168   48339 type.go:168] "Request Body" body=""
	I1212 00:15:56.786239   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:56.786561   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:57.287075   48339 type.go:168] "Request Body" body=""
	I1212 00:15:57.287147   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:57.287400   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:57.787153   48339 type.go:168] "Request Body" body=""
	I1212 00:15:57.787225   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:57.787534   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:57.787585   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:58.286375   48339 type.go:168] "Request Body" body=""
	I1212 00:15:58.286450   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:58.286783   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:58.786215   48339 type.go:168] "Request Body" body=""
	I1212 00:15:58.786282   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:58.786594   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:59.286241   48339 type.go:168] "Request Body" body=""
	I1212 00:15:59.286312   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:59.286622   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:59.786317   48339 type.go:168] "Request Body" body=""
	I1212 00:15:59.786388   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:59.786719   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:00.292274   48339 type.go:168] "Request Body" body=""
	I1212 00:16:00.292358   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:00.292654   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:00.292703   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:00.786205   48339 type.go:168] "Request Body" body=""
	I1212 00:16:00.786286   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:00.786644   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:01.286348   48339 type.go:168] "Request Body" body=""
	I1212 00:16:01.286432   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:01.286773   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:01.787146   48339 type.go:168] "Request Body" body=""
	I1212 00:16:01.787221   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:01.787510   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:02.286209   48339 type.go:168] "Request Body" body=""
	I1212 00:16:02.286300   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:02.286617   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:02.786467   48339 type.go:168] "Request Body" body=""
	I1212 00:16:02.786540   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:02.786883   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:02.786938   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:03.286672   48339 type.go:168] "Request Body" body=""
	I1212 00:16:03.286737   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:03.287012   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:03.786797   48339 type.go:168] "Request Body" body=""
	I1212 00:16:03.786868   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:03.787218   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:04.078782   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:16:04.137731   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:04.141181   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:04.141215   48339 retry.go:31] will retry after 17.411337884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:04.286486   48339 type.go:168] "Request Body" body=""
	I1212 00:16:04.286564   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:04.286889   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:04.786185   48339 type.go:168] "Request Body" body=""
	I1212 00:16:04.786276   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:04.786662   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:05.286261   48339 type.go:168] "Request Body" body=""
	I1212 00:16:05.286336   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:05.286651   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:05.286703   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:05.786375   48339 type.go:168] "Request Body" body=""
	I1212 00:16:05.786467   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:05.786794   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:06.286188   48339 type.go:168] "Request Body" body=""
	I1212 00:16:06.286265   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:06.286589   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:06.786260   48339 type.go:168] "Request Body" body=""
	I1212 00:16:06.786341   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:06.786641   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:07.245320   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:16:07.286783   48339 type.go:168] "Request Body" body=""
	I1212 00:16:07.286895   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:07.287194   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:07.287250   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:07.304749   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:07.304789   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:07.304807   48339 retry.go:31] will retry after 24.953429831s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
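Each ssh_runner line above shells out to the cluster's own kubectl binary with KUBECONFIG set, and the apply fails because client-side validation needs the apiserver's /openapi/v2 endpoint, which is refusing connections. A stripped-down local sketch of that command using os/exec (sudo and the SSH transport omitted; paths as in the log):

package main

import (
	"fmt"
	"os/exec"
)

// applyAddon runs kubectl apply --force for one addon manifest, the same
// command this log shows failing while the apiserver is unreachable.
func applyAddon(kubectl, kubeconfig, manifest string) error {
	cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
	cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	err := applyAddon(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	)
	if err != nil {
		fmt.Println(err)
	}
}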
	I1212 00:16:07.787063   48339 type.go:168] "Request Body" body=""
	I1212 00:16:07.787138   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:07.787437   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:08.286404   48339 type.go:168] "Request Body" body=""
	I1212 00:16:08.286476   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:08.286783   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:08.786218   48339 type.go:168] "Request Body" body=""
	I1212 00:16:08.786293   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:08.786671   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:09.286981   48339 type.go:168] "Request Body" body=""
	I1212 00:16:09.287066   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:09.287329   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:09.287373   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:09.787100   48339 type.go:168] "Request Body" body=""
	I1212 00:16:09.787195   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:09.787534   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:10.286235   48339 type.go:168] "Request Body" body=""
	I1212 00:16:10.286321   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:10.286701   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:10.786206   48339 type.go:168] "Request Body" body=""
	I1212 00:16:10.786294   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:10.786608   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:11.286206   48339 type.go:168] "Request Body" body=""
	I1212 00:16:11.286280   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:11.286613   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:11.786187   48339 type.go:168] "Request Body" body=""
	I1212 00:16:11.786279   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:11.786620   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:11.786679   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:12.286942   48339 type.go:168] "Request Body" body=""
	I1212 00:16:12.287031   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:12.287292   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:12.786305   48339 type.go:168] "Request Body" body=""
	I1212 00:16:12.786379   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:12.786714   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:13.286636   48339 type.go:168] "Request Body" body=""
	I1212 00:16:13.286735   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:13.287061   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:13.786837   48339 type.go:168] "Request Body" body=""
	I1212 00:16:13.786905   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:13.787175   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:13.787217   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:14.286785   48339 type.go:168] "Request Body" body=""
	I1212 00:16:14.286860   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:14.287199   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:14.786985   48339 type.go:168] "Request Body" body=""
	I1212 00:16:14.787080   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:14.787391   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:15.287017   48339 type.go:168] "Request Body" body=""
	I1212 00:16:15.287092   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:15.287365   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:15.786104   48339 type.go:168] "Request Body" body=""
	I1212 00:16:15.786194   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:15.786515   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:16.286214   48339 type.go:168] "Request Body" body=""
	I1212 00:16:16.286312   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:16.286611   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:16.286662   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:16.787101   48339 type.go:168] "Request Body" body=""
	I1212 00:16:16.787177   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:16.787436   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:17.286203   48339 type.go:168] "Request Body" body=""
	I1212 00:16:17.286282   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:17.286588   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:17.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:16:17.786273   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:17.786616   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:18.286463   48339 type.go:168] "Request Body" body=""
	I1212 00:16:18.286538   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:18.286889   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:18.286938   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:18.786189   48339 type.go:168] "Request Body" body=""
	I1212 00:16:18.786282   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:18.786626   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:19.286360   48339 type.go:168] "Request Body" body=""
	I1212 00:16:19.286434   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:19.286751   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:19.786162   48339 type.go:168] "Request Body" body=""
	I1212 00:16:19.786239   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:19.786514   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:20.286225   48339 type.go:168] "Request Body" body=""
	I1212 00:16:20.286301   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:20.286620   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:20.786210   48339 type.go:168] "Request Body" body=""
	I1212 00:16:20.786283   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:20.786562   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:20.786610   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:21.286154   48339 type.go:168] "Request Body" body=""
	I1212 00:16:21.286236   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:21.286508   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:21.552920   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:16:21.609312   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:21.612881   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:21.612910   48339 retry.go:31] will retry after 24.114548677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
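
The retry.go line above schedules a rerun of the failed kubectl apply after a delay. A generic sketch of the retry-after-delay pattern; the jitter strategy below is an assumption for illustration, not minikube's exact retry.go:

// Sketch: rerun fn up to attempts times, sleeping a randomized delay between
// tries and logging it, as in the "will retry after ..." line above.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base))) // assumed jitter: [base, 2*base)
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	_ = retry(3, 2*time.Second, func() error {
		return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
	})
}
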
	I1212 00:16:21.786128   48339 type.go:168] "Request Body" body=""
	I1212 00:16:21.786221   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:21.786547   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:22.286255   48339 type.go:168] "Request Body" body=""
	I1212 00:16:22.286336   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:22.286677   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:22.786457   48339 type.go:168] "Request Body" body=""
	I1212 00:16:22.786525   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:22.786771   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:22.786820   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:23.286766   48339 type.go:168] "Request Body" body=""
	I1212 00:16:23.286841   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:23.287234   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:23.787069   48339 type.go:168] "Request Body" body=""
	I1212 00:16:23.787143   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:23.787481   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:24.286171   48339 type.go:168] "Request Body" body=""
	I1212 00:16:24.286252   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:24.286533   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:24.786238   48339 type.go:168] "Request Body" body=""
	I1212 00:16:24.786310   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:24.786625   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:25.286352   48339 type.go:168] "Request Body" body=""
	I1212 00:16:25.286433   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:25.286738   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:25.286790   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:25.786139   48339 type.go:168] "Request Body" body=""
	I1212 00:16:25.786227   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:25.786511   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:26.286200   48339 type.go:168] "Request Body" body=""
	I1212 00:16:26.286292   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:26.286614   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:26.786309   48339 type.go:168] "Request Body" body=""
	I1212 00:16:26.786416   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:26.786728   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:27.286245   48339 type.go:168] "Request Body" body=""
	I1212 00:16:27.286316   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:27.286597   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:27.786283   48339 type.go:168] "Request Body" body=""
	I1212 00:16:27.786355   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:27.786690   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:27.786745   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:28.286519   48339 type.go:168] "Request Body" body=""
	I1212 00:16:28.286594   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:28.286931   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:28.786692   48339 type.go:168] "Request Body" body=""
	I1212 00:16:28.786765   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:28.787040   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:29.286807   48339 type.go:168] "Request Body" body=""
	I1212 00:16:29.286879   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:29.287246   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:29.786890   48339 type.go:168] "Request Body" body=""
	I1212 00:16:29.786966   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:29.787276   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:29.787321   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:30.287063   48339 type.go:168] "Request Body" body=""
	I1212 00:16:30.287137   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:30.287393   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:30.787118   48339 type.go:168] "Request Body" body=""
	I1212 00:16:30.787201   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:30.787551   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:31.286150   48339 type.go:168] "Request Body" body=""
	I1212 00:16:31.286271   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:31.286606   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:31.786159   48339 type.go:168] "Request Body" body=""
	I1212 00:16:31.786233   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:31.786502   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:32.259311   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:16:32.286776   48339 type.go:168] "Request Body" body=""
	I1212 00:16:32.286852   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:32.287141   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:32.287191   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:32.315690   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:32.319144   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:32.319251   48339 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
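
The ssh_runner/addons.go lines above show minikube applying addon manifests with the node's own kubectl binary and retrying on failure. A local approximation of that step with os/exec (paths copied from the log; they only exist inside the minikube node, so treat them as placeholders elsewhere):

// Sketch: run kubectl apply with an explicit KUBECONFIG and surface the
// combined output on failure, as addons.go does before scheduling a retry.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyManifest(kubectl, kubeconfig, manifest string) error {
	cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		// The caller logs stdout/stderr and retries, as the
		// "apply failed, will retry" lines above show.
		return fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	err := applyManifest(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl", // path from the log
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	)
	if err != nil {
		fmt.Println(err)
	}
}
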
	I1212 00:16:32.786146   48339 type.go:168] "Request Body" body=""
	I1212 00:16:32.786230   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:32.786571   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:33.286348   48339 type.go:168] "Request Body" body=""
	I1212 00:16:33.286423   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:33.286668   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:33.786187   48339 type.go:168] "Request Body" body=""
	I1212 00:16:33.786262   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:33.786597   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:34.286351   48339 type.go:168] "Request Body" body=""
	I1212 00:16:34.286425   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:34.286777   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:34.787084   48339 type.go:168] "Request Body" body=""
	I1212 00:16:34.787156   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:34.787405   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:34.787444   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:35.286102   48339 type.go:168] "Request Body" body=""
	I1212 00:16:35.286177   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:35.286533   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:35.786215   48339 type.go:168] "Request Body" body=""
	I1212 00:16:35.786285   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:35.786632   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:36.287087   48339 type.go:168] "Request Body" body=""
	I1212 00:16:36.287160   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:36.287418   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:36.786103   48339 type.go:168] "Request Body" body=""
	I1212 00:16:36.786193   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:36.786526   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:37.286129   48339 type.go:168] "Request Body" body=""
	I1212 00:16:37.286202   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:37.286544   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:37.286600   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:37.787026   48339 type.go:168] "Request Body" body=""
	I1212 00:16:37.787100   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:37.787357   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:38.286531   48339 type.go:168] "Request Body" body=""
	I1212 00:16:38.286611   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:38.286935   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:38.786684   48339 type.go:168] "Request Body" body=""
	I1212 00:16:38.786754   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:38.787096   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:39.286816   48339 type.go:168] "Request Body" body=""
	I1212 00:16:39.286887   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:39.287147   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:39.287187   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:39.786891   48339 type.go:168] "Request Body" body=""
	I1212 00:16:39.786969   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:39.787334   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:40.287018   48339 type.go:168] "Request Body" body=""
	I1212 00:16:40.287113   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:40.287426   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:40.786868   48339 type.go:168] "Request Body" body=""
	I1212 00:16:40.786934   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:40.787251   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:41.287087   48339 type.go:168] "Request Body" body=""
	I1212 00:16:41.287180   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:41.287508   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:41.287561   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:41.786226   48339 type.go:168] "Request Body" body=""
	I1212 00:16:41.786304   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:41.786661   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:42.286381   48339 type.go:168] "Request Body" body=""
	I1212 00:16:42.286463   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:42.286744   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:42.786456   48339 type.go:168] "Request Body" body=""
	I1212 00:16:42.786532   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:42.786873   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:43.286753   48339 type.go:168] "Request Body" body=""
	I1212 00:16:43.286834   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:43.287195   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:43.786972   48339 type.go:168] "Request Body" body=""
	I1212 00:16:43.787061   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:43.787340   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:43.787388   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:44.287150   48339 type.go:168] "Request Body" body=""
	I1212 00:16:44.287228   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:44.287570   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:44.786168   48339 type.go:168] "Request Body" body=""
	I1212 00:16:44.786239   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:44.786580   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:45.286154   48339 type.go:168] "Request Body" body=""
	I1212 00:16:45.286221   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:45.286507   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:45.728277   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:16:45.786458   48339 type.go:168] "Request Body" body=""
	I1212 00:16:45.786536   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:45.786800   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:45.788347   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:45.788381   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:45.788458   48339 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 00:16:45.791789   48339 out.go:179] * Enabled addons: 
	I1212 00:16:45.795459   48339 addons.go:530] duration metric: took 1m33.015656607s for enable addons: enabled=[]
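
The "duration metric" line above times the whole addon-enable phase; since every apply failed while the apiserver was unreachable, the enabled set is empty. A trivial illustration of how such a line can be produced:

// Sketch: time a phase with time.Since and report which addons succeeded.
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	enabled := []string{} // no addon apply succeeded before the apiserver came back
	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n", time.Since(start), enabled)
}
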
	I1212 00:16:46.287010   48339 type.go:168] "Request Body" body=""
	I1212 00:16:46.287081   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:46.287404   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:46.287462   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:46.786103   48339 type.go:168] "Request Body" body=""
	I1212 00:16:46.786175   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:46.786467   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:47.286190   48339 type.go:168] "Request Body" body=""
	I1212 00:16:47.286259   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:47.286575   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:47.786211   48339 type.go:168] "Request Body" body=""
	I1212 00:16:47.786307   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:47.786638   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:48.286474   48339 type.go:168] "Request Body" body=""
	I1212 00:16:48.286546   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:48.286806   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:48.786468   48339 type.go:168] "Request Body" body=""
	I1212 00:16:48.786549   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:48.786891   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:48.786943   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:49.286477   48339 type.go:168] "Request Body" body=""
	I1212 00:16:49.286551   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:49.286848   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:49.786149   48339 type.go:168] "Request Body" body=""
	I1212 00:16:49.786225   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:49.786558   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:50.286220   48339 type.go:168] "Request Body" body=""
	I1212 00:16:50.286298   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:50.286632   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:50.786373   48339 type.go:168] "Request Body" body=""
	I1212 00:16:50.786482   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:50.786811   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:51.287106   48339 type.go:168] "Request Body" body=""
	I1212 00:16:51.287186   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:51.287452   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:51.287504   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:51.786168   48339 type.go:168] "Request Body" body=""
	I1212 00:16:51.786246   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:51.786652   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:52.286230   48339 type.go:168] "Request Body" body=""
	I1212 00:16:52.286316   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:52.286605   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:52.786447   48339 type.go:168] "Request Body" body=""
	I1212 00:16:52.786524   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:52.786794   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:53.286795   48339 type.go:168] "Request Body" body=""
	I1212 00:16:53.286881   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:53.287250   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:53.786893   48339 type.go:168] "Request Body" body=""
	I1212 00:16:53.786965   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:53.787310   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:53.787368   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:54.287009   48339 type.go:168] "Request Body" body=""
	I1212 00:16:54.287074   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:54.287399   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:54.787136   48339 type.go:168] "Request Body" body=""
	I1212 00:16:54.787210   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:54.787556   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:55.286167   48339 type.go:168] "Request Body" body=""
	I1212 00:16:55.286259   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:55.286627   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:55.787083   48339 type.go:168] "Request Body" body=""
	I1212 00:16:55.787159   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:55.787400   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:55.787438   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:56.286110   48339 type.go:168] "Request Body" body=""
	I1212 00:16:56.286192   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:56.286559   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:56.786120   48339 type.go:168] "Request Body" body=""
	I1212 00:16:56.786194   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:56.786507   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:57.286159   48339 type.go:168] "Request Body" body=""
	I1212 00:16:57.286235   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:57.286541   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:57.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:16:57.786273   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:57.786608   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:58.286384   48339 type.go:168] "Request Body" body=""
	I1212 00:16:58.286456   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:58.286786   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:58.286842   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:58.786113   48339 type.go:168] "Request Body" body=""
	I1212 00:16:58.786195   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:58.786436   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:59.286107   48339 type.go:168] "Request Body" body=""
	I1212 00:16:59.286184   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:59.286539   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:59.786120   48339 type.go:168] "Request Body" body=""
	I1212 00:16:59.786208   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:59.786557   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:00.301305   48339 type.go:168] "Request Body" body=""
	I1212 00:17:00.301394   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:00.301705   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:00.301755   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:00.786915   48339 type.go:168] "Request Body" body=""
	I1212 00:17:00.787023   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:00.787365   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... ~120 further polling cycles omitted: identical GET requests to https://192.168.49.2:8441/api/v1/nodes/functional-767012 every ~500 ms from 00:17:01.286 through 00:18:02.286, each answered with "dial tcp 192.168.49.2:8441: connect: connection refused" (empty status, 0-1 ms), with node_ready.go:55 logging the "will retry" warning below roughly every two seconds ...]
	W1212 00:18:01.286864   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:02.786397   48339 type.go:168] "Request Body" body=""
	I1212 00:18:02.786473   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:02.786729   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:03.286684   48339 type.go:168] "Request Body" body=""
	I1212 00:18:03.286756   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:03.287062   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:03.287108   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:03.786757   48339 type.go:168] "Request Body" body=""
	I1212 00:18:03.786848   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:03.787220   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:04.286890   48339 type.go:168] "Request Body" body=""
	I1212 00:18:04.286971   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:04.287276   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:04.787022   48339 type.go:168] "Request Body" body=""
	I1212 00:18:04.787101   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:04.787413   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:05.286164   48339 type.go:168] "Request Body" body=""
	I1212 00:18:05.286245   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:05.286589   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:05.786893   48339 type.go:168] "Request Body" body=""
	I1212 00:18:05.786965   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:05.787232   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:05.787272   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:06.287096   48339 type.go:168] "Request Body" body=""
	I1212 00:18:06.287189   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:06.287596   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:06.786293   48339 type.go:168] "Request Body" body=""
	I1212 00:18:06.786366   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:06.786687   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:07.286878   48339 type.go:168] "Request Body" body=""
	I1212 00:18:07.286943   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:07.287205   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:07.786915   48339 type.go:168] "Request Body" body=""
	I1212 00:18:07.786985   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:07.787328   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:07.787380   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:08.286823   48339 type.go:168] "Request Body" body=""
	I1212 00:18:08.286912   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:08.287273   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:08.786885   48339 type.go:168] "Request Body" body=""
	I1212 00:18:08.786957   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:08.787238   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:09.286257   48339 type.go:168] "Request Body" body=""
	I1212 00:18:09.286349   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:09.286656   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:09.786355   48339 type.go:168] "Request Body" body=""
	I1212 00:18:09.786440   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:09.786773   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:10.286467   48339 type.go:168] "Request Body" body=""
	I1212 00:18:10.286571   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:10.286828   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:10.286869   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:10.786201   48339 type.go:168] "Request Body" body=""
	I1212 00:18:10.786280   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:10.786615   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:11.286318   48339 type.go:168] "Request Body" body=""
	I1212 00:18:11.286395   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:11.286719   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:11.786408   48339 type.go:168] "Request Body" body=""
	I1212 00:18:11.786479   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:11.786752   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:12.286228   48339 type.go:168] "Request Body" body=""
	I1212 00:18:12.286305   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:12.286693   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:12.786445   48339 type.go:168] "Request Body" body=""
	I1212 00:18:12.786529   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:12.786847   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:12.786901   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:13.286863   48339 type.go:168] "Request Body" body=""
	I1212 00:18:13.286936   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:13.287242   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:13.786941   48339 type.go:168] "Request Body" body=""
	I1212 00:18:13.787040   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:13.787410   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:14.287037   48339 type.go:168] "Request Body" body=""
	I1212 00:18:14.287114   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:14.287432   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:14.786139   48339 type.go:168] "Request Body" body=""
	I1212 00:18:14.786211   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:14.786471   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:15.286165   48339 type.go:168] "Request Body" body=""
	I1212 00:18:15.286243   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:15.286559   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:15.286619   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:15.786275   48339 type.go:168] "Request Body" body=""
	I1212 00:18:15.786355   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:15.786707   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:16.286358   48339 type.go:168] "Request Body" body=""
	I1212 00:18:16.286435   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:16.286754   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:16.786213   48339 type.go:168] "Request Body" body=""
	I1212 00:18:16.786285   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:16.786626   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:17.286235   48339 type.go:168] "Request Body" body=""
	I1212 00:18:17.286316   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:17.286711   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:17.286765   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:17.786210   48339 type.go:168] "Request Body" body=""
	I1212 00:18:17.786299   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:17.786594   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:18.286667   48339 type.go:168] "Request Body" body=""
	I1212 00:18:18.286745   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:18.287093   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:18.786871   48339 type.go:168] "Request Body" body=""
	I1212 00:18:18.786957   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:18.787347   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:19.287118   48339 type.go:168] "Request Body" body=""
	I1212 00:18:19.287189   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:19.287538   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:19.287598   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:19.786173   48339 type.go:168] "Request Body" body=""
	I1212 00:18:19.786250   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:19.786591   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:20.286290   48339 type.go:168] "Request Body" body=""
	I1212 00:18:20.286368   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:20.286732   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:20.786421   48339 type.go:168] "Request Body" body=""
	I1212 00:18:20.786496   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:20.786769   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:21.286229   48339 type.go:168] "Request Body" body=""
	I1212 00:18:21.286299   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:21.286631   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:21.786238   48339 type.go:168] "Request Body" body=""
	I1212 00:18:21.786325   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:21.786704   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:21.786756   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:22.286201   48339 type.go:168] "Request Body" body=""
	I1212 00:18:22.286267   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:22.286513   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:22.786439   48339 type.go:168] "Request Body" body=""
	I1212 00:18:22.786511   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:22.786820   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:23.286747   48339 type.go:168] "Request Body" body=""
	I1212 00:18:23.286828   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:23.287136   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:23.786886   48339 type.go:168] "Request Body" body=""
	I1212 00:18:23.786958   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:23.787219   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:23.787272   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:24.287069   48339 type.go:168] "Request Body" body=""
	I1212 00:18:24.287145   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:24.287464   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:24.787101   48339 type.go:168] "Request Body" body=""
	I1212 00:18:24.787205   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:24.787503   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:25.286157   48339 type.go:168] "Request Body" body=""
	I1212 00:18:25.286231   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:25.286484   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:25.786179   48339 type.go:168] "Request Body" body=""
	I1212 00:18:25.786250   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:25.786581   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:26.286254   48339 type.go:168] "Request Body" body=""
	I1212 00:18:26.286329   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:26.286638   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:26.286693   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:26.786132   48339 type.go:168] "Request Body" body=""
	I1212 00:18:26.786199   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:26.786452   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:27.286167   48339 type.go:168] "Request Body" body=""
	I1212 00:18:27.286240   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:27.286520   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:27.786148   48339 type.go:168] "Request Body" body=""
	I1212 00:18:27.786250   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:27.786565   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:28.286477   48339 type.go:168] "Request Body" body=""
	I1212 00:18:28.286544   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:28.286801   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:28.286842   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:28.786195   48339 type.go:168] "Request Body" body=""
	I1212 00:18:28.786269   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:28.786563   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:29.286151   48339 type.go:168] "Request Body" body=""
	I1212 00:18:29.286228   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:29.286541   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:29.786783   48339 type.go:168] "Request Body" body=""
	I1212 00:18:29.786859   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:29.787122   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:30.286875   48339 type.go:168] "Request Body" body=""
	I1212 00:18:30.286953   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:30.287291   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:30.287342   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:30.786921   48339 type.go:168] "Request Body" body=""
	I1212 00:18:30.787054   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:30.787386   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:31.287040   48339 type.go:168] "Request Body" body=""
	I1212 00:18:31.287113   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:31.287420   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:31.786111   48339 type.go:168] "Request Body" body=""
	I1212 00:18:31.786190   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:31.786534   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:32.286242   48339 type.go:168] "Request Body" body=""
	I1212 00:18:32.286317   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:32.286644   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:32.787099   48339 type.go:168] "Request Body" body=""
	I1212 00:18:32.787169   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:32.787444   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:32.787485   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:33.286455   48339 type.go:168] "Request Body" body=""
	I1212 00:18:33.286531   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:33.286867   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:33.786185   48339 type.go:168] "Request Body" body=""
	I1212 00:18:33.786263   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:33.786599   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:34.287030   48339 type.go:168] "Request Body" body=""
	I1212 00:18:34.287101   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:34.287356   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:34.787107   48339 type.go:168] "Request Body" body=""
	I1212 00:18:34.787178   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:34.787462   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:34.787506   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:35.287151   48339 type.go:168] "Request Body" body=""
	I1212 00:18:35.287227   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:35.287561   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:35.786156   48339 type.go:168] "Request Body" body=""
	I1212 00:18:35.786227   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:35.786476   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:36.286225   48339 type.go:168] "Request Body" body=""
	I1212 00:18:36.286302   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:36.286658   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:36.786364   48339 type.go:168] "Request Body" body=""
	I1212 00:18:36.786441   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:36.786776   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:37.287081   48339 type.go:168] "Request Body" body=""
	I1212 00:18:37.287160   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:37.287429   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:37.287479   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:37.786116   48339 type.go:168] "Request Body" body=""
	I1212 00:18:37.786189   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:37.786493   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:38.286438   48339 type.go:168] "Request Body" body=""
	I1212 00:18:38.286517   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:38.286835   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:38.786180   48339 type.go:168] "Request Body" body=""
	I1212 00:18:38.786274   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:38.786571   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:39.286206   48339 type.go:168] "Request Body" body=""
	I1212 00:18:39.286282   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:39.286612   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:39.786192   48339 type.go:168] "Request Body" body=""
	I1212 00:18:39.786279   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:39.786630   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:39.786682   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:40.286354   48339 type.go:168] "Request Body" body=""
	I1212 00:18:40.286444   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:40.286835   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:40.786204   48339 type.go:168] "Request Body" body=""
	I1212 00:18:40.786287   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:40.786601   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:41.286231   48339 type.go:168] "Request Body" body=""
	I1212 00:18:41.286307   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:41.286653   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:41.786899   48339 type.go:168] "Request Body" body=""
	I1212 00:18:41.787023   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:41.787291   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:41.787331   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:42.287103   48339 type.go:168] "Request Body" body=""
	I1212 00:18:42.287183   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:42.287534   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:42.786413   48339 type.go:168] "Request Body" body=""
	I1212 00:18:42.786496   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:42.786838   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:43.286712   48339 type.go:168] "Request Body" body=""
	I1212 00:18:43.286788   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:43.287076   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:43.786845   48339 type.go:168] "Request Body" body=""
	I1212 00:18:43.786921   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:43.787255   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:44.287058   48339 type.go:168] "Request Body" body=""
	I1212 00:18:44.287145   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:44.287474   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:44.287531   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:44.786152   48339 type.go:168] "Request Body" body=""
	I1212 00:18:44.786226   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:44.786558   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:45.286226   48339 type.go:168] "Request Body" body=""
	I1212 00:18:45.286300   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:45.286609   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:45.786194   48339 type.go:168] "Request Body" body=""
	I1212 00:18:45.786265   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:45.786613   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:46.287075   48339 type.go:168] "Request Body" body=""
	I1212 00:18:46.287143   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:46.287427   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:46.786109   48339 type.go:168] "Request Body" body=""
	I1212 00:18:46.786181   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:46.786497   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:46.786555   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:47.286231   48339 type.go:168] "Request Body" body=""
	I1212 00:18:47.286325   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:47.286637   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:47.786326   48339 type.go:168] "Request Body" body=""
	I1212 00:18:47.786398   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:47.786701   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:48.286663   48339 type.go:168] "Request Body" body=""
	I1212 00:18:48.286736   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:48.287070   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:48.786872   48339 type.go:168] "Request Body" body=""
	I1212 00:18:48.786951   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:48.787298   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:48.787351   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:49.287060   48339 type.go:168] "Request Body" body=""
	I1212 00:18:49.287138   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:49.287405   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:49.786097   48339 type.go:168] "Request Body" body=""
	I1212 00:18:49.786175   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:49.786470   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:50.286223   48339 type.go:168] "Request Body" body=""
	I1212 00:18:50.286298   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:50.286634   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:50.786914   48339 type.go:168] "Request Body" body=""
	I1212 00:18:50.786986   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:50.787320   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:50.787380   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:51.287127   48339 type.go:168] "Request Body" body=""
	I1212 00:18:51.287204   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:51.287530   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:51.786096   48339 type.go:168] "Request Body" body=""
	I1212 00:18:51.786170   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:51.786513   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:52.286949   48339 type.go:168] "Request Body" body=""
	I1212 00:18:52.287031   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:52.287290   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:52.786331   48339 type.go:168] "Request Body" body=""
	I1212 00:18:52.786411   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:52.786755   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:53.286620   48339 type.go:168] "Request Body" body=""
	I1212 00:18:53.286694   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:53.287034   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:53.287095   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-767012 request/response cycle repeats at ~500ms intervals from 00:18:53 through 00:19:53, every attempt refused at the TCP layer ("dial tcp 192.168.49.2:8441: connect: connection refused"), with node_ready.go logging the same "will retry" warning roughly every 2-2.5 seconds; the duplicate cycles are elided here ...]
	I1212 00:19:54.287089   48339 type.go:168] "Request Body" body=""
	I1212 00:19:54.287161   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:54.287510   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:54.787071   48339 type.go:168] "Request Body" body=""
	I1212 00:19:54.787148   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:54.787473   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:54.787523   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:55.287042   48339 type.go:168] "Request Body" body=""
	I1212 00:19:55.287120   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:55.287397   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:55.786097   48339 type.go:168] "Request Body" body=""
	I1212 00:19:55.786167   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:55.786471   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:56.286180   48339 type.go:168] "Request Body" body=""
	I1212 00:19:56.286255   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:56.286560   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:56.786731   48339 type.go:168] "Request Body" body=""
	I1212 00:19:56.786834   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:56.787097   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:57.286916   48339 type.go:168] "Request Body" body=""
	I1212 00:19:57.287011   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:57.287338   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:57.287392   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:57.787113   48339 type.go:168] "Request Body" body=""
	I1212 00:19:57.787195   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:57.787542   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:58.286383   48339 type.go:168] "Request Body" body=""
	I1212 00:19:58.286455   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:58.286708   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:58.786168   48339 type.go:168] "Request Body" body=""
	I1212 00:19:58.786245   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:58.786576   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:59.286179   48339 type.go:168] "Request Body" body=""
	I1212 00:19:59.286256   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:59.286592   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:59.786275   48339 type.go:168] "Request Body" body=""
	I1212 00:19:59.786344   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:59.786595   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:59.786633   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:00.286342   48339 type.go:168] "Request Body" body=""
	I1212 00:20:00.286436   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:00.286738   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:00.786599   48339 type.go:168] "Request Body" body=""
	I1212 00:20:00.786680   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:00.787175   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:01.286983   48339 type.go:168] "Request Body" body=""
	I1212 00:20:01.287070   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:01.287375   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:01.787109   48339 type.go:168] "Request Body" body=""
	I1212 00:20:01.787182   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:01.787524   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:01.787578   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:02.286136   48339 type.go:168] "Request Body" body=""
	I1212 00:20:02.286214   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:02.286541   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:02.786447   48339 type.go:168] "Request Body" body=""
	I1212 00:20:02.786522   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:02.786791   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:03.286727   48339 type.go:168] "Request Body" body=""
	I1212 00:20:03.286808   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:03.287147   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:03.786954   48339 type.go:168] "Request Body" body=""
	I1212 00:20:03.787051   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:03.787411   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:04.287105   48339 type.go:168] "Request Body" body=""
	I1212 00:20:04.287184   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:04.287440   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:04.287480   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:04.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:20:04.786275   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:04.786621   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:05.286300   48339 type.go:168] "Request Body" body=""
	I1212 00:20:05.286378   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:05.286699   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:05.786189   48339 type.go:168] "Request Body" body=""
	I1212 00:20:05.786286   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:05.786574   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:06.286216   48339 type.go:168] "Request Body" body=""
	I1212 00:20:06.286291   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:06.286653   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:06.786351   48339 type.go:168] "Request Body" body=""
	I1212 00:20:06.786425   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:06.786777   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:06.786833   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:07.286483   48339 type.go:168] "Request Body" body=""
	I1212 00:20:07.286562   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:07.286815   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:07.786485   48339 type.go:168] "Request Body" body=""
	I1212 00:20:07.786559   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:07.786920   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:08.286761   48339 type.go:168] "Request Body" body=""
	I1212 00:20:08.286836   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:08.287188   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:08.786941   48339 type.go:168] "Request Body" body=""
	I1212 00:20:08.787029   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:08.787324   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:08.787386   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:09.287127   48339 type.go:168] "Request Body" body=""
	I1212 00:20:09.287201   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:09.287579   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:09.786165   48339 type.go:168] "Request Body" body=""
	I1212 00:20:09.786264   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:09.786669   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:10.286348   48339 type.go:168] "Request Body" body=""
	I1212 00:20:10.286420   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:10.286711   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:10.786398   48339 type.go:168] "Request Body" body=""
	I1212 00:20:10.786476   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:10.786785   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:11.286171   48339 type.go:168] "Request Body" body=""
	I1212 00:20:11.286251   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:11.286562   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:11.286616   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:11.786160   48339 type.go:168] "Request Body" body=""
	I1212 00:20:11.786237   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:11.786560   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:12.286232   48339 type.go:168] "Request Body" body=""
	I1212 00:20:12.286313   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:12.286631   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:12.786525   48339 type.go:168] "Request Body" body=""
	I1212 00:20:12.786596   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:12.786927   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:13.286693   48339 type.go:168] "Request Body" body=""
	I1212 00:20:13.286759   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:13.287036   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:13.287076   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:13.786823   48339 type.go:168] "Request Body" body=""
	I1212 00:20:13.786903   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:13.787250   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:14.287106   48339 type.go:168] "Request Body" body=""
	I1212 00:20:14.287193   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:14.287515   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:14.786201   48339 type.go:168] "Request Body" body=""
	I1212 00:20:14.786277   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:14.786598   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:15.286226   48339 type.go:168] "Request Body" body=""
	I1212 00:20:15.286303   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:15.286675   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:15.786377   48339 type.go:168] "Request Body" body=""
	I1212 00:20:15.786454   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:15.786771   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:15.786825   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:16.286146   48339 type.go:168] "Request Body" body=""
	I1212 00:20:16.286230   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:16.286475   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:16.786183   48339 type.go:168] "Request Body" body=""
	I1212 00:20:16.786256   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:16.786581   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:17.286264   48339 type.go:168] "Request Body" body=""
	I1212 00:20:17.286366   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:17.286686   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:17.786376   48339 type.go:168] "Request Body" body=""
	I1212 00:20:17.786459   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:17.786714   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:18.286808   48339 type.go:168] "Request Body" body=""
	I1212 00:20:18.286881   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:18.287211   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:18.287257   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:18.787025   48339 type.go:168] "Request Body" body=""
	I1212 00:20:18.787098   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:18.787407   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:19.286245   48339 type.go:168] "Request Body" body=""
	I1212 00:20:19.286455   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:19.287173   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:19.786138   48339 type.go:168] "Request Body" body=""
	I1212 00:20:19.786225   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:19.786578   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:20.286250   48339 type.go:168] "Request Body" body=""
	I1212 00:20:20.286325   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:20.286634   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:20.786202   48339 type.go:168] "Request Body" body=""
	I1212 00:20:20.786280   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:20.786538   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:20.786583   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:21.286166   48339 type.go:168] "Request Body" body=""
	I1212 00:20:21.286242   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:21.286538   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:21.786130   48339 type.go:168] "Request Body" body=""
	I1212 00:20:21.786205   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:21.786517   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:22.287092   48339 type.go:168] "Request Body" body=""
	I1212 00:20:22.287164   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:22.287411   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:22.786375   48339 type.go:168] "Request Body" body=""
	I1212 00:20:22.786456   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:22.786778   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:22.786829   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:23.286658   48339 type.go:168] "Request Body" body=""
	I1212 00:20:23.286731   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:23.287085   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:23.786836   48339 type.go:168] "Request Body" body=""
	I1212 00:20:23.786908   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:23.787187   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:24.286964   48339 type.go:168] "Request Body" body=""
	I1212 00:20:24.287062   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:24.287428   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:24.786115   48339 type.go:168] "Request Body" body=""
	I1212 00:20:24.786188   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:24.786524   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:25.286237   48339 type.go:168] "Request Body" body=""
	I1212 00:20:25.286345   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:25.286768   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:25.286850   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:25.786511   48339 type.go:168] "Request Body" body=""
	I1212 00:20:25.786607   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:25.786978   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:26.286790   48339 type.go:168] "Request Body" body=""
	I1212 00:20:26.286875   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:26.287221   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:26.786820   48339 type.go:168] "Request Body" body=""
	I1212 00:20:26.786891   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:26.787243   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:27.287025   48339 type.go:168] "Request Body" body=""
	I1212 00:20:27.287103   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:27.287476   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:27.287533   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:27.786181   48339 type.go:168] "Request Body" body=""
	I1212 00:20:27.786256   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:27.786579   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:28.286328   48339 type.go:168] "Request Body" body=""
	I1212 00:20:28.286403   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:28.286680   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:28.786377   48339 type.go:168] "Request Body" body=""
	I1212 00:20:28.786452   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:28.786763   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:29.286254   48339 type.go:168] "Request Body" body=""
	I1212 00:20:29.286331   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:29.286614   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:29.787081   48339 type.go:168] "Request Body" body=""
	I1212 00:20:29.787157   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:29.787430   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:29.787484   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:30.286195   48339 type.go:168] "Request Body" body=""
	I1212 00:20:30.286367   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:30.286726   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:30.786404   48339 type.go:168] "Request Body" body=""
	I1212 00:20:30.786481   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:30.786819   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:31.286538   48339 type.go:168] "Request Body" body=""
	I1212 00:20:31.286615   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:31.286953   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:31.786734   48339 type.go:168] "Request Body" body=""
	I1212 00:20:31.786823   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:31.787169   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:32.286853   48339 type.go:168] "Request Body" body=""
	I1212 00:20:32.286946   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:32.287277   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:32.287336   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:32.786355   48339 type.go:168] "Request Body" body=""
	I1212 00:20:32.786440   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:32.786710   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:33.286694   48339 type.go:168] "Request Body" body=""
	I1212 00:20:33.286774   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:33.287132   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:33.786905   48339 type.go:168] "Request Body" body=""
	I1212 00:20:33.786983   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:33.787332   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:34.287037   48339 type.go:168] "Request Body" body=""
	I1212 00:20:34.287105   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:34.287355   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:34.287394   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:34.787091   48339 type.go:168] "Request Body" body=""
	I1212 00:20:34.787167   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:34.787475   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:35.286183   48339 type.go:168] "Request Body" body=""
	I1212 00:20:35.286264   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:35.286585   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:35.786156   48339 type.go:168] "Request Body" body=""
	I1212 00:20:35.786225   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:35.786538   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:36.286227   48339 type.go:168] "Request Body" body=""
	I1212 00:20:36.286330   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:36.286653   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:36.786356   48339 type.go:168] "Request Body" body=""
	I1212 00:20:36.786434   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:36.786764   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:36.786818   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:37.286091   48339 type.go:168] "Request Body" body=""
	I1212 00:20:37.286166   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:37.286500   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:37.786184   48339 type.go:168] "Request Body" body=""
	I1212 00:20:37.786263   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:37.786572   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:38.286481   48339 type.go:168] "Request Body" body=""
	I1212 00:20:38.286552   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:38.286881   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:38.786443   48339 type.go:168] "Request Body" body=""
	I1212 00:20:38.786517   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:38.786773   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:39.286212   48339 type.go:168] "Request Body" body=""
	I1212 00:20:39.286290   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:39.286616   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:39.286667   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:39.786165   48339 type.go:168] "Request Body" body=""
	I1212 00:20:39.786242   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:39.786530   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:40.286181   48339 type.go:168] "Request Body" body=""
	I1212 00:20:40.286252   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:40.286503   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:40.786171   48339 type.go:168] "Request Body" body=""
	I1212 00:20:40.786243   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:40.786563   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:41.286127   48339 type.go:168] "Request Body" body=""
	I1212 00:20:41.286208   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:41.286529   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:41.787086   48339 type.go:168] "Request Body" body=""
	I1212 00:20:41.787155   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:41.787421   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:41.787466   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:42.286149   48339 type.go:168] "Request Body" body=""
	I1212 00:20:42.286244   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:42.286590   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:42.786361   48339 type.go:168] "Request Body" body=""
	I1212 00:20:42.786438   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:42.786779   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:43.286633   48339 type.go:168] "Request Body" body=""
	I1212 00:20:43.286702   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:43.286960   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:43.786722   48339 type.go:168] "Request Body" body=""
	I1212 00:20:43.786804   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:43.787206   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:44.286934   48339 type.go:168] "Request Body" body=""
	I1212 00:20:44.287023   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:44.287351   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:44.287409   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:44.786842   48339 type.go:168] "Request Body" body=""
	I1212 00:20:44.786917   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:44.787191   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:45.286977   48339 type.go:168] "Request Body" body=""
	I1212 00:20:45.287067   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:45.287390   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:45.787177   48339 type.go:168] "Request Body" body=""
	I1212 00:20:45.787257   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:45.787616   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:46.286986   48339 type.go:168] "Request Body" body=""
	I1212 00:20:46.287083   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:46.287348   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:46.787132   48339 type.go:168] "Request Body" body=""
	I1212 00:20:46.787205   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:46.787529   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:46.787585   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:47.286216   48339 type.go:168] "Request Body" body=""
	I1212 00:20:47.286289   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:47.286635   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:47.787077   48339 type.go:168] "Request Body" body=""
	I1212 00:20:47.787165   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:47.787464   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:48.286384   48339 type.go:168] "Request Body" body=""
	I1212 00:20:48.286461   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:48.286804   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:48.786191   48339 type.go:168] "Request Body" body=""
	I1212 00:20:48.786264   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:48.786586   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:49.286166   48339 type.go:168] "Request Body" body=""
	I1212 00:20:49.286240   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:49.286495   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:49.286545   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:49.786168   48339 type.go:168] "Request Body" body=""
	I1212 00:20:49.786246   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:49.786526   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:50.286237   48339 type.go:168] "Request Body" body=""
	I1212 00:20:50.286315   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:50.286678   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:50.786121   48339 type.go:168] "Request Body" body=""
	I1212 00:20:50.786187   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:50.786438   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:51.286121   48339 type.go:168] "Request Body" body=""
	I1212 00:20:51.286198   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:51.286527   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:51.286572   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:51.786155   48339 type.go:168] "Request Body" body=""
	I1212 00:20:51.786230   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:51.786579   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:52.286140   48339 type.go:168] "Request Body" body=""
	I1212 00:20:52.286212   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:52.286463   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:52.786341   48339 type.go:168] "Request Body" body=""
	I1212 00:20:52.786421   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:52.786710   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:53.286513   48339 type.go:168] "Request Body" body=""
	I1212 00:20:53.286636   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:53.286976   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:53.287052   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:53.786687   48339 type.go:168] "Request Body" body=""
	I1212 00:20:53.786760   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:53.787036   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:54.286863   48339 type.go:168] "Request Body" body=""
	I1212 00:20:54.286939   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:54.287249   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:54.787061   48339 type.go:168] "Request Body" body=""
	I1212 00:20:54.787143   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:54.787476   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:55.286970   48339 type.go:168] "Request Body" body=""
	I1212 00:20:55.287058   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:55.287308   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:55.287347   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:55.786924   48339 type.go:168] "Request Body" body=""
	I1212 00:20:55.787017   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:55.787330   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:56.287109   48339 type.go:168] "Request Body" body=""
	I1212 00:20:56.287182   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:56.287490   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:56.786897   48339 type.go:168] "Request Body" body=""
	I1212 00:20:56.786972   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:56.787241   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:57.287067   48339 type.go:168] "Request Body" body=""
	I1212 00:20:57.287145   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:57.287509   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:57.287566   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:20:57.786231   48339 type.go:168] "Request Body" body=""
	I1212 00:20:57.786303   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:57.786626   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:58.286503   48339 type.go:168] "Request Body" body=""
	I1212 00:20:58.286567   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:58.286819   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:58.786175   48339 type.go:168] "Request Body" body=""
	I1212 00:20:58.786249   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:58.786577   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:59.286221   48339 type.go:168] "Request Body" body=""
	I1212 00:20:59.286300   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:59.286643   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:20:59.786201   48339 type.go:168] "Request Body" body=""
	I1212 00:20:59.786272   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:20:59.786717   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:20:59.786766   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:00.286416   48339 type.go:168] "Request Body" body=""
	I1212 00:21:00.286498   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:00.286792   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:00.786190   48339 type.go:168] "Request Body" body=""
	I1212 00:21:00.786269   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:00.786582   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:01.286121   48339 type.go:168] "Request Body" body=""
	I1212 00:21:01.286194   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:01.286449   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:01.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:21:01.786294   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:01.786641   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:02.286227   48339 type.go:168] "Request Body" body=""
	I1212 00:21:02.286306   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:02.286637   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:02.286688   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:02.786377   48339 type.go:168] "Request Body" body=""
	I1212 00:21:02.786458   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:02.786789   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:03.286595   48339 type.go:168] "Request Body" body=""
	I1212 00:21:03.286680   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:03.287072   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:03.786847   48339 type.go:168] "Request Body" body=""
	I1212 00:21:03.786925   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:03.787257   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:04.287036   48339 type.go:168] "Request Body" body=""
	I1212 00:21:04.287108   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:04.287431   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:04.287477   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:04.786103   48339 type.go:168] "Request Body" body=""
	I1212 00:21:04.786178   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:04.786510   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:05.286213   48339 type.go:168] "Request Body" body=""
	I1212 00:21:05.286293   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:05.286653   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:05.786170   48339 type.go:168] "Request Body" body=""
	I1212 00:21:05.786239   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:05.786497   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:06.286230   48339 type.go:168] "Request Body" body=""
	I1212 00:21:06.286305   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:06.286647   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:06.786360   48339 type.go:168] "Request Body" body=""
	I1212 00:21:06.786435   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:06.786771   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:06.786825   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:07.286459   48339 type.go:168] "Request Body" body=""
	I1212 00:21:07.286536   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:07.286784   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:07.786179   48339 type.go:168] "Request Body" body=""
	I1212 00:21:07.786260   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:07.786613   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:08.286429   48339 type.go:168] "Request Body" body=""
	I1212 00:21:08.286512   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:08.286882   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:08.786159   48339 type.go:168] "Request Body" body=""
	I1212 00:21:08.786230   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:08.791780   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1212 00:21:08.791841   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:09.286491   48339 type.go:168] "Request Body" body=""
	I1212 00:21:09.286564   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:09.286869   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:09.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:21:09.786280   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:09.786589   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:10.286143   48339 type.go:168] "Request Body" body=""
	I1212 00:21:10.286219   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:10.286481   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:10.786184   48339 type.go:168] "Request Body" body=""
	I1212 00:21:10.786253   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:10.786584   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:11.286268   48339 type.go:168] "Request Body" body=""
	I1212 00:21:11.286353   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:11.286684   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:11.286736   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:11.786169   48339 type.go:168] "Request Body" body=""
	I1212 00:21:11.786241   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:11.786538   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:12.286254   48339 type.go:168] "Request Body" body=""
	I1212 00:21:12.286329   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:12.286629   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:12.786499   48339 type.go:168] "Request Body" body=""
	I1212 00:21:12.786576   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:12.786914   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:13.286651   48339 type.go:168] "Request Body" body=""
	I1212 00:21:13.286728   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:13.286985   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:13.287050   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:13.786749   48339 type.go:168] "Request Body" body=""
	I1212 00:21:13.786806   48339 node_ready.go:38] duration metric: took 6m0.00081197s for node "functional-767012" to be "Ready" ...
	I1212 00:21:13.789905   48339 out.go:203] 
	W1212 00:21:13.792750   48339 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 00:21:13.792769   48339 out.go:285] * 
	W1212 00:21:13.794879   48339 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:21:13.797575   48339 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 00:21:21 functional-767012 containerd[5228]: time="2025-12-12T00:21:21.326584300Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:21:22 functional-767012 containerd[5228]: time="2025-12-12T00:21:22.426399597Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 12 00:21:22 functional-767012 containerd[5228]: time="2025-12-12T00:21:22.429099624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 12 00:21:22 functional-767012 containerd[5228]: time="2025-12-12T00:21:22.436695899Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:21:22 functional-767012 containerd[5228]: time="2025-12-12T00:21:22.437190344Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:21:23 functional-767012 containerd[5228]: time="2025-12-12T00:21:23.376992857Z" level=info msg="No images store for sha256:70ac7e50cd2bc79b9d4a21c7c3336a342085c395cbc060852cb7a1a27478be50"
	Dec 12 00:21:23 functional-767012 containerd[5228]: time="2025-12-12T00:21:23.379248081Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-767012\""
	Dec 12 00:21:23 functional-767012 containerd[5228]: time="2025-12-12T00:21:23.391823869Z" level=info msg="ImageCreate event name:\"sha256:2af6a2f60c44ae40a2b1bc226758dd0a3c3f1c0d22fd7d74035513945443e825\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:21:23 functional-767012 containerd[5228]: time="2025-12-12T00:21:23.392117509Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-767012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:21:24 functional-767012 containerd[5228]: time="2025-12-12T00:21:24.146206177Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 12 00:21:24 functional-767012 containerd[5228]: time="2025-12-12T00:21:24.148641398Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 12 00:21:24 functional-767012 containerd[5228]: time="2025-12-12T00:21:24.151623413Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 12 00:21:24 functional-767012 containerd[5228]: time="2025-12-12T00:21:24.162436159Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.097681168Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.100430566Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.102367887Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.110297605Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.290393375Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.292904978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.303585981Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.303967252Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.424210519Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.426335395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.434299796Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.434898537Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:21:27.156001    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:21:27.156612    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:21:27.158476    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:21:27.158883    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:21:27.160532    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 00:21:27 up  1:03,  0 user,  load average: 0.73, 0.38, 0.55
	Linux functional-767012 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:21:23 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:21:24 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 823.
	Dec 12 00:21:24 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:24 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:24 functional-767012 kubelet[8955]: E1212 00:21:24.531624    8955 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:21:24 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:21:24 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:21:25 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 824.
	Dec 12 00:21:25 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:25 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:25 functional-767012 kubelet[9039]: E1212 00:21:25.300105    9039 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:21:25 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:21:25 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:21:26 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 12 00:21:26 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:26 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:26 functional-767012 kubelet[9081]: E1212 00:21:26.097181    9081 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:21:26 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:21:26 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:21:26 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 12 00:21:26 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:26 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:26 functional-767012 kubelet[9109]: E1212 00:21:26.844490    9109 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:21:26 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:21:26 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
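Note: the kubelet excerpt above shows the unit crash-looping (systemd restart counter climbing 823 through 826) because kubelet refuses to validate its configuration on a cgroup v1 host. Since the apiserver runs as a static pod under kubelet, this keeps 192.168.49.2:8441 down, which matches the six minutes of "connection refused" polling earlier in the log. A minimal way to confirm which cgroup mode the node container actually sees, assuming the profile name doubles as the container name as in the docker inspect output later in this report, would be:

	docker exec functional-767012 stat -fc %T /sys/fs/cgroup
	# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy cgroup v1 hierarchy.

This is a diagnostic sketch, not a command the test harness ran.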
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012: exit status 2 (362.225872ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-767012" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-767012 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-767012 get pods: exit status 1 (98.226224ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-767012 get pods": exit status 1
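The refusal here is consistent with the apiserver never having come up during StartWithProxy, rather than with a kubectl or kubeconfig problem. A quick spot-check from the host (a hypothetical command, not part of the test run; -k skips TLS verification since minikube's CA is not in the host trust store) would be:

	curl -k https://192.168.49.2:8441/healthz

A healthy apiserver answers "ok"; against this node it would fail with the same "connection refused".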
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-767012
helpers_test.go:244: (dbg) docker inspect functional-767012:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	        "Created": "2025-12-12T00:06:52.261765556Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42951,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:06:52.317917194Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hostname",
	        "HostsPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hosts",
	        "LogPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e-json.log",
	        "Name": "/functional-767012",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-767012:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-767012",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	                "LowerDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-767012",
	                "Source": "/var/lib/docker/volumes/functional-767012/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-767012",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-767012",
	                "name.minikube.sigs.k8s.io": "functional-767012",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e781257da3adf1d3284ab2a6de0168c3db7957f25a7e53d0015250294193762d",
	            "SandboxKey": "/var/run/docker/netns/e781257da3ad",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-767012": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:4d:78:ba:7d:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "83467cc4cb13818b98ec0d7cb5fc0064ea6eb2c8db4256a8a81330921aa2d9a4",
	                    "EndpointID": "b787b732d8d748776ceeb6e65fab51cc1e79758446bc85ac20043b35593fab12",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-767012",
	                        "6585a82fe5e6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
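The inspect output confirms the container itself is fine at the Docker layer: State.Running is true and 8441/tcp is published to 127.0.0.1:32791, so the failure sits inside the guest. To pull just that port mapping instead of the full JSON, a Go-template query works (a sketch; the container name is taken from the inspect output above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-767012
	# prints 32791, per the Ports section above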
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012: exit status 2 (338.880908ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-095481 image ls --format yaml --alsologtostderr                                                                                              │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image   │ functional-095481 image ls --format short --alsologtostderr                                                                                             │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image   │ functional-095481 image ls --format table --alsologtostderr                                                                                             │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image   │ functional-095481 image ls --format json --alsologtostderr                                                                                              │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ ssh     │ functional-095481 ssh pgrep buildkitd                                                                                                                   │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │                     │
	│ image   │ functional-095481 image build -t localhost/my-image:functional-095481 testdata/build --alsologtostderr                                                  │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image   │ functional-095481 image ls                                                                                                                              │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ delete  │ -p functional-095481                                                                                                                                    │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ start   │ -p functional-767012 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │                     │
	│ start   │ -p functional-767012 --alsologtostderr -v=8                                                                                                             │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:15 UTC │                     │
	│ cache   │ functional-767012 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ functional-767012 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ functional-767012 cache add registry.k8s.io/pause:latest                                                                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ functional-767012 cache add minikube-local-cache-test:functional-767012                                                                                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ functional-767012 cache delete minikube-local-cache-test:functional-767012                                                                              │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-767012 ssh sudo crictl images                                                                                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-767012 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-767012 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ cache   │ functional-767012 cache reload                                                                                                                          │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-767012 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ kubectl │ functional-767012 kubectl -- --context functional-767012 get pods                                                                                       │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:15:08
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:15:08.188216   48339 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:15:08.188435   48339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:15:08.188463   48339 out.go:374] Setting ErrFile to fd 2...
	I1212 00:15:08.188485   48339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:15:08.188893   48339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:15:08.189436   48339 out.go:368] Setting JSON to false
	I1212 00:15:08.190327   48339 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3455,"bootTime":1765495054,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 00:15:08.190468   48339 start.go:143] virtualization:  
	I1212 00:15:08.194075   48339 out.go:179] * [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:15:08.197745   48339 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:15:08.197889   48339 notify.go:221] Checking for updates...
	I1212 00:15:08.203623   48339 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:15:08.206559   48339 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:08.209313   48339 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 00:15:08.212202   48339 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:15:08.215231   48339 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:15:08.218454   48339 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:15:08.218601   48339 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:15:08.244528   48339 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:15:08.244655   48339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:15:08.299617   48339 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:15:08.290252755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:15:08.299730   48339 docker.go:319] overlay module found
	I1212 00:15:08.302863   48339 out.go:179] * Using the docker driver based on existing profile
	I1212 00:15:08.305730   48339 start.go:309] selected driver: docker
	I1212 00:15:08.305754   48339 start.go:927] validating driver "docker" against &{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:15:08.305854   48339 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:15:08.305953   48339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:15:08.359436   48339 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:15:08.349975764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:15:08.359860   48339 cni.go:84] Creating CNI manager for ""
	I1212 00:15:08.359920   48339 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:15:08.359966   48339 start.go:353] cluster config:
	{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:15:08.363136   48339 out.go:179] * Starting "functional-767012" primary control-plane node in "functional-767012" cluster
	I1212 00:15:08.365917   48339 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 00:15:08.368829   48339 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:15:08.371809   48339 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:15:08.371858   48339 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 00:15:08.371872   48339 cache.go:65] Caching tarball of preloaded images
	I1212 00:15:08.371970   48339 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 00:15:08.371992   48339 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 00:15:08.372099   48339 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/config.json ...
	I1212 00:15:08.372328   48339 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:15:08.391509   48339 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:15:08.391533   48339 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:15:08.391552   48339 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:15:08.391583   48339 start.go:360] acquireMachinesLock for functional-767012: {Name:mk41cf89e93a3830367886ebbef2bb8f6e99e3f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:15:08.391643   48339 start.go:364] duration metric: took 36.464µs to acquireMachinesLock for "functional-767012"
	I1212 00:15:08.391666   48339 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:15:08.391675   48339 fix.go:54] fixHost starting: 
	I1212 00:15:08.391939   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:08.408717   48339 fix.go:112] recreateIfNeeded on functional-767012: state=Running err=<nil>
	W1212 00:15:08.408748   48339 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:15:08.411849   48339 out.go:252] * Updating the running docker "functional-767012" container ...
	I1212 00:15:08.411881   48339 machine.go:94] provisionDockerMachine start ...
	I1212 00:15:08.411961   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:08.429482   48339 main.go:143] libmachine: Using SSH client type: native
	I1212 00:15:08.429817   48339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:15:08.429834   48339 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:15:08.578648   48339 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
	I1212 00:15:08.578671   48339 ubuntu.go:182] provisioning hostname "functional-767012"
	I1212 00:15:08.578741   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:08.596871   48339 main.go:143] libmachine: Using SSH client type: native
	I1212 00:15:08.597187   48339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:15:08.597227   48339 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-767012 && echo "functional-767012" | sudo tee /etc/hostname
	I1212 00:15:08.759668   48339 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
	I1212 00:15:08.759746   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:08.776780   48339 main.go:143] libmachine: Using SSH client type: native
	I1212 00:15:08.777096   48339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:15:08.777119   48339 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-767012' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-767012/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-767012' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:15:08.931523   48339 main.go:143] libmachine: SSH cmd err, output: <nil>: 
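
The /etc/hosts edit just above pins the profile name to 127.0.1.1: if some line already ends with the hostname it is left alone, an existing 127.0.1.1 entry is rewritten in place, and otherwise a new entry is appended. The same logic over an in-memory hosts string, as a minimal Go sketch (pinHostname is an illustrative name, not a minikube function):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// pinHostname mirrors the shell above: if any line already maps the name,
	// leave the file alone; else rewrite an existing 127.0.1.1 line, or append.
	func pinHostname(hosts, name string) string {
		hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`)
		if hasName.MatchString(hosts) {
			return hosts
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		in := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
		fmt.Print(pinHostname(in, "functional-767012"))
	}
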
	I1212 00:15:08.931550   48339 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 00:15:08.931582   48339 ubuntu.go:190] setting up certificates
	I1212 00:15:08.931592   48339 provision.go:84] configureAuth start
	I1212 00:15:08.931653   48339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:15:08.952406   48339 provision.go:143] copyHostCerts
	I1212 00:15:08.952454   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 00:15:08.952497   48339 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 00:15:08.952507   48339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 00:15:08.952585   48339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 00:15:08.952685   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 00:15:08.952707   48339 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 00:15:08.952712   48339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 00:15:08.952745   48339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 00:15:08.952800   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 00:15:08.952821   48339 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 00:15:08.952828   48339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 00:15:08.952852   48339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 00:15:08.952913   48339 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.functional-767012 san=[127.0.0.1 192.168.49.2 functional-767012 localhost minikube]
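
The san=[...] list in the line above is the set of subject alternative names baked into the machine's server certificate. A sketch of an equivalent x509 template using Go's standard library (self-signed here purely for brevity; as the log shows, minikube actually signs with the CA key from the auth options):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-767012"}},
			// SANs matching the san=[...] list logged above.
			DNSNames:    []string{"functional-767012", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		fmt.Println(len(der), err)
	}
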
	I1212 00:15:09.089842   48339 provision.go:177] copyRemoteCerts
	I1212 00:15:09.089908   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:15:09.089956   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.108065   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.210645   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:15:09.210700   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:15:09.228116   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:15:09.228176   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:15:09.245824   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:15:09.245889   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:15:09.263086   48339 provision.go:87] duration metric: took 331.470752ms to configureAuth
	I1212 00:15:09.263116   48339 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:15:09.263293   48339 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:15:09.263306   48339 machine.go:97] duration metric: took 851.418761ms to provisionDockerMachine
	I1212 00:15:09.263315   48339 start.go:293] postStartSetup for "functional-767012" (driver="docker")
	I1212 00:15:09.263326   48339 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:15:09.263390   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:15:09.263439   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.281753   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.386868   48339 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:15:09.390421   48339 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 00:15:09.390442   48339 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 00:15:09.390447   48339 command_runner.go:130] > VERSION_ID="12"
	I1212 00:15:09.390451   48339 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 00:15:09.390456   48339 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 00:15:09.390460   48339 command_runner.go:130] > ID=debian
	I1212 00:15:09.390464   48339 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 00:15:09.390469   48339 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 00:15:09.390475   48339 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 00:15:09.390546   48339 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:15:09.390568   48339 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:15:09.390580   48339 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 00:15:09.390640   48339 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 00:15:09.390732   48339 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 00:15:09.390742   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> /etc/ssl/certs/42902.pem
	I1212 00:15:09.390816   48339 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts -> hosts in /etc/test/nested/copy/4290
	I1212 00:15:09.390824   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts -> /etc/test/nested/copy/4290/hosts
	I1212 00:15:09.390867   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4290
	I1212 00:15:09.398526   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:15:09.416059   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts --> /etc/test/nested/copy/4290/hosts (40 bytes)
	I1212 00:15:09.433237   48339 start.go:296] duration metric: took 169.908089ms for postStartSetup
	I1212 00:15:09.433321   48339 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:15:09.433384   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.450800   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.556105   48339 command_runner.go:130] > 14%
	I1212 00:15:09.557034   48339 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:15:09.562380   48339 command_runner.go:130] > 169G
	I1212 00:15:09.562946   48339 fix.go:56] duration metric: took 1.171267005s for fixHost
	I1212 00:15:09.562967   48339 start.go:83] releasing machines lock for "functional-767012", held for 1.171312429s
	I1212 00:15:09.563050   48339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:15:09.582602   48339 ssh_runner.go:195] Run: cat /version.json
	I1212 00:15:09.582654   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.582889   48339 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:15:09.582947   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:09.601106   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.627042   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:09.706722   48339 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1212 00:15:09.706847   48339 ssh_runner.go:195] Run: systemctl --version
	I1212 00:15:09.800321   48339 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 00:15:09.800390   48339 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 00:15:09.800423   48339 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 00:15:09.800514   48339 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:15:09.804624   48339 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 00:15:09.804945   48339 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:15:09.805036   48339 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:15:09.812955   48339 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:15:09.813030   48339 start.go:496] detecting cgroup driver to use...
	I1212 00:15:09.813095   48339 detect.go:187] detected "cgroupfs" cgroup driver on host os
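
One way to arrive at the same answer as detect.go here is to ask the host daemon directly, since `docker info` in this run reports CgroupDriver:cgroupfs. A minimal sketch of that query (an assumption about the mechanism, not minikube's exact detection code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask the host docker daemon which cgroup driver it uses.
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println("detect failed:", err)
			return
		}
		fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
	}
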
	I1212 00:15:09.813242   48339 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:15:09.829352   48339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:15:09.842558   48339 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:15:09.842620   48339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:15:09.858553   48339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:15:09.872251   48339 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:15:10.008398   48339 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:15:10.140361   48339 docker.go:234] disabling docker service ...
	I1212 00:15:10.140425   48339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:15:10.156860   48339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:15:10.170461   48339 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:15:10.304156   48339 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:15:10.452566   48339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:15:10.465745   48339 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:15:10.479553   48339 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 00:15:10.480868   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 00:15:10.489677   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:15:10.498827   48339 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:15:10.498939   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:15:10.508103   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:15:10.516726   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:15:10.525281   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:15:10.533906   48339 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:15:10.541697   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 00:15:10.550595   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 00:15:10.559645   48339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 00:15:10.568588   48339 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:15:10.575412   48339 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 00:15:10.576366   48339 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:15:10.583788   48339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:15:10.698857   48339 ssh_runner.go:195] Run: sudo systemctl restart containerd
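
The run of sed edits above rewrites /etc/containerd/config.toml so containerd matches the detected cgroupfs driver, chiefly by forcing every SystemdCgroup assignment to false before the service restart. The key substitution expressed in Go over a config string (a sketch, not minikube's implementation):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true
	`
		// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}
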
	I1212 00:15:10.837222   48339 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 00:15:10.837316   48339 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 00:15:10.841505   48339 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1212 00:15:10.841543   48339 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 00:15:10.841551   48339 command_runner.go:130] > Device: 0,72	Inode: 1614        Links: 1
	I1212 00:15:10.841558   48339 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:15:10.841564   48339 command_runner.go:130] > Access: 2025-12-12 00:15:10.793315522 +0000
	I1212 00:15:10.841569   48339 command_runner.go:130] > Modify: 2025-12-12 00:15:10.793315522 +0000
	I1212 00:15:10.841575   48339 command_runner.go:130] > Change: 2025-12-12 00:15:10.793315522 +0000
	I1212 00:15:10.841583   48339 command_runner.go:130] >  Birth: -
	I1212 00:15:10.841612   48339 start.go:564] Will wait 60s for crictl version
	I1212 00:15:10.841667   48339 ssh_runner.go:195] Run: which crictl
	I1212 00:15:10.845418   48339 command_runner.go:130] > /usr/local/bin/crictl
	I1212 00:15:10.845528   48339 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:15:10.867684   48339 command_runner.go:130] > Version:  0.1.0
	I1212 00:15:10.867710   48339 command_runner.go:130] > RuntimeName:  containerd
	I1212 00:15:10.867718   48339 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1212 00:15:10.867725   48339 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 00:15:10.869691   48339 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 00:15:10.869761   48339 ssh_runner.go:195] Run: containerd --version
	I1212 00:15:10.889630   48339 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1212 00:15:10.891644   48339 ssh_runner.go:195] Run: containerd --version
	I1212 00:15:10.909520   48339 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1212 00:15:10.917318   48339 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 00:15:10.920211   48339 cli_runner.go:164] Run: docker network inspect functional-767012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:15:10.936971   48339 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:15:10.940949   48339 command_runner.go:130] > 192.168.49.1	host.minikube.internal
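
The --format argument in the network-inspect command above is a Go text/template evaluated against docker's network object. The same mechanism over a stand-in struct (field names mimic only the parts of the template used here; this is not the docker API type):

	package main

	import (
		"os"
		"text/template"
	)

	type ipamConfig struct{ Subnet, Gateway string }

	type network struct {
		Name, Driver string
		IPAM         struct{ Config []ipamConfig }
	}

	func main() {
		const format = `{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}"}`
		n := network{Name: "functional-767012", Driver: "bridge"}
		n.IPAM.Config = []ipamConfig{{Subnet: "192.168.49.0/24", Gateway: "192.168.49.1"}}
		tmpl := template.Must(template.New("net").Parse(format))
		_ = tmpl.Execute(os.Stdout, n)
	}
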
	I1212 00:15:10.941183   48339 kubeadm.go:884] updating cluster {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:15:10.941314   48339 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:15:10.941401   48339 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:15:10.964902   48339 command_runner.go:130] > {
	I1212 00:15:10.964923   48339 command_runner.go:130] >   "images":  [
	I1212 00:15:10.964934   48339 command_runner.go:130] >     {
	I1212 00:15:10.964944   48339 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:15:10.964949   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.964954   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:15:10.964957   48339 command_runner.go:130] >       ],
	I1212 00:15:10.964962   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.964974   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 00:15:10.964977   48339 command_runner.go:130] >       ],
	I1212 00:15:10.964982   48339 command_runner.go:130] >       "size":  "40636774",
	I1212 00:15:10.964989   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.964994   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965005   48339 command_runner.go:130] >     },
	I1212 00:15:10.965009   48339 command_runner.go:130] >     {
	I1212 00:15:10.965017   48339 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:15:10.965023   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965029   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:15:10.965032   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965036   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965047   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:15:10.965050   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965054   48339 command_runner.go:130] >       "size":  "8034419",
	I1212 00:15:10.965058   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965062   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965068   48339 command_runner.go:130] >     },
	I1212 00:15:10.965071   48339 command_runner.go:130] >     {
	I1212 00:15:10.965079   48339 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:15:10.965085   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965092   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:15:10.965095   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965101   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965112   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 00:15:10.965115   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965121   48339 command_runner.go:130] >       "size":  "21168808",
	I1212 00:15:10.965129   48339 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:15:10.965134   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965137   48339 command_runner.go:130] >     },
	I1212 00:15:10.965143   48339 command_runner.go:130] >     {
	I1212 00:15:10.965152   48339 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:15:10.965164   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965169   48339 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:15:10.965172   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965176   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965190   48339 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 00:15:10.965193   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965199   48339 command_runner.go:130] >       "size":  "21136588",
	I1212 00:15:10.965203   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965218   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965224   48339 command_runner.go:130] >       },
	I1212 00:15:10.965228   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965231   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965235   48339 command_runner.go:130] >     },
	I1212 00:15:10.965238   48339 command_runner.go:130] >     {
	I1212 00:15:10.965245   48339 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:15:10.965251   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965256   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:15:10.965262   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965266   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965274   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 00:15:10.965278   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965285   48339 command_runner.go:130] >       "size":  "24678359",
	I1212 00:15:10.965288   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965296   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965302   48339 command_runner.go:130] >       },
	I1212 00:15:10.965306   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965311   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965314   48339 command_runner.go:130] >     },
	I1212 00:15:10.965323   48339 command_runner.go:130] >     {
	I1212 00:15:10.965332   48339 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:15:10.965345   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965350   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:15:10.965354   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965358   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965373   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 00:15:10.965377   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965381   48339 command_runner.go:130] >       "size":  "20661043",
	I1212 00:15:10.965385   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965392   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965395   48339 command_runner.go:130] >       },
	I1212 00:15:10.965399   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965403   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965406   48339 command_runner.go:130] >     },
	I1212 00:15:10.965412   48339 command_runner.go:130] >     {
	I1212 00:15:10.965420   48339 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:15:10.965426   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965431   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:15:10.965434   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965438   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965446   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:15:10.965453   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965457   48339 command_runner.go:130] >       "size":  "22429671",
	I1212 00:15:10.965461   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965465   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965469   48339 command_runner.go:130] >     },
	I1212 00:15:10.965475   48339 command_runner.go:130] >     {
	I1212 00:15:10.965482   48339 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:15:10.965486   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965492   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:15:10.965497   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965502   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965515   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 00:15:10.965522   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965526   48339 command_runner.go:130] >       "size":  "15391364",
	I1212 00:15:10.965530   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965534   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.965539   48339 command_runner.go:130] >       },
	I1212 00:15:10.965543   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965553   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.965556   48339 command_runner.go:130] >     },
	I1212 00:15:10.965559   48339 command_runner.go:130] >     {
	I1212 00:15:10.965566   48339 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:15:10.965570   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.965574   48339 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:15:10.965578   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965582   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.965591   48339 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 00:15:10.965602   48339 command_runner.go:130] >       ],
	I1212 00:15:10.965606   48339 command_runner.go:130] >       "size":  "267939",
	I1212 00:15:10.965610   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.965614   48339 command_runner.go:130] >         "value":  "65535"
	I1212 00:15:10.965617   48339 command_runner.go:130] >       },
	I1212 00:15:10.965628   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.965632   48339 command_runner.go:130] >       "pinned":  true
	I1212 00:15:10.965635   48339 command_runner.go:130] >     }
	I1212 00:15:10.965638   48339 command_runner.go:130] >   ]
	I1212 00:15:10.965640   48339 command_runner.go:130] > }
	I1212 00:15:10.968555   48339 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:15:10.968581   48339 containerd.go:534] Images already preloaded, skipping extraction
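
The "all images are preloaded" conclusion above comes from parsing the `sudo crictl images --output json` payload printed in the log and confirming that each required repo tag for v1.35.0-beta.0 is present. A minimal Go decode of that JSON shape (the struct is trimmed to the fields actually used; this mirrors the output format shown, not minikube's own types):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.10.1"]}]}`)
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, t := range img.RepoTags {
				have[t] = true
			}
		}
		fmt.Println("pause preloaded:", have["registry.k8s.io/pause:3.10.1"])
	}
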
	I1212 00:15:10.968640   48339 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:15:10.995305   48339 command_runner.go:130] > {
	I1212 00:15:10.995329   48339 command_runner.go:130] >   "images":  [
	I1212 00:15:10.995334   48339 command_runner.go:130] >     {
	I1212 00:15:10.995344   48339 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 00:15:10.995349   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995355   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 00:15:10.995359   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995375   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995392   48339 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1212 00:15:10.995395   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995400   48339 command_runner.go:130] >       "size":  "40636774",
	I1212 00:15:10.995404   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995408   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995414   48339 command_runner.go:130] >     },
	I1212 00:15:10.995418   48339 command_runner.go:130] >     {
	I1212 00:15:10.995429   48339 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 00:15:10.995438   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995444   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:15:10.995448   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995452   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995466   48339 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 00:15:10.995470   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995475   48339 command_runner.go:130] >       "size":  "8034419",
	I1212 00:15:10.995483   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995487   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995490   48339 command_runner.go:130] >     },
	I1212 00:15:10.995493   48339 command_runner.go:130] >     {
	I1212 00:15:10.995500   48339 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 00:15:10.995506   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995512   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 00:15:10.995515   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995524   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995536   48339 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1212 00:15:10.995540   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995544   48339 command_runner.go:130] >       "size":  "21168808",
	I1212 00:15:10.995554   48339 command_runner.go:130] >       "username":  "nonroot",
	I1212 00:15:10.995558   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995561   48339 command_runner.go:130] >     },
	I1212 00:15:10.995564   48339 command_runner.go:130] >     {
	I1212 00:15:10.995572   48339 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 00:15:10.995583   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995588   48339 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 00:15:10.995592   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995596   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995603   48339 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1212 00:15:10.995611   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995615   48339 command_runner.go:130] >       "size":  "21136588",
	I1212 00:15:10.995619   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995623   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995631   48339 command_runner.go:130] >       },
	I1212 00:15:10.995635   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995639   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995642   48339 command_runner.go:130] >     },
	I1212 00:15:10.995646   48339 command_runner.go:130] >     {
	I1212 00:15:10.995659   48339 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 00:15:10.995663   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995678   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 00:15:10.995687   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995692   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995701   48339 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1212 00:15:10.995709   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995713   48339 command_runner.go:130] >       "size":  "24678359",
	I1212 00:15:10.995716   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995727   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995734   48339 command_runner.go:130] >       },
	I1212 00:15:10.995738   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995743   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995746   48339 command_runner.go:130] >     },
	I1212 00:15:10.995749   48339 command_runner.go:130] >     {
	I1212 00:15:10.995756   48339 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 00:15:10.995762   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995768   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 00:15:10.995771   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995782   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995795   48339 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1212 00:15:10.995798   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995802   48339 command_runner.go:130] >       "size":  "20661043",
	I1212 00:15:10.995811   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995815   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995820   48339 command_runner.go:130] >       },
	I1212 00:15:10.995830   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995834   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995838   48339 command_runner.go:130] >     },
	I1212 00:15:10.995841   48339 command_runner.go:130] >     {
	I1212 00:15:10.995847   48339 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 00:15:10.995854   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995859   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 00:15:10.995863   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995867   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995877   48339 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 00:15:10.995884   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995888   48339 command_runner.go:130] >       "size":  "22429671",
	I1212 00:15:10.995893   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995902   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.995906   48339 command_runner.go:130] >     },
	I1212 00:15:10.995909   48339 command_runner.go:130] >     {
	I1212 00:15:10.995916   48339 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 00:15:10.995924   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.995929   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 00:15:10.995933   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995937   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.995948   48339 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1212 00:15:10.995952   48339 command_runner.go:130] >       ],
	I1212 00:15:10.995956   48339 command_runner.go:130] >       "size":  "15391364",
	I1212 00:15:10.995963   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.995967   48339 command_runner.go:130] >         "value":  "0"
	I1212 00:15:10.995983   48339 command_runner.go:130] >       },
	I1212 00:15:10.995993   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.995997   48339 command_runner.go:130] >       "pinned":  false
	I1212 00:15:10.996001   48339 command_runner.go:130] >     },
	I1212 00:15:10.996004   48339 command_runner.go:130] >     {
	I1212 00:15:10.996011   48339 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 00:15:10.996020   48339 command_runner.go:130] >       "repoTags":  [
	I1212 00:15:10.996025   48339 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 00:15:10.996029   48339 command_runner.go:130] >       ],
	I1212 00:15:10.996033   48339 command_runner.go:130] >       "repoDigests":  [
	I1212 00:15:10.996046   48339 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1212 00:15:10.996053   48339 command_runner.go:130] >       ],
	I1212 00:15:10.996057   48339 command_runner.go:130] >       "size":  "267939",
	I1212 00:15:10.996061   48339 command_runner.go:130] >       "uid":  {
	I1212 00:15:10.996065   48339 command_runner.go:130] >         "value":  "65535"
	I1212 00:15:10.996074   48339 command_runner.go:130] >       },
	I1212 00:15:10.996078   48339 command_runner.go:130] >       "username":  "",
	I1212 00:15:10.996086   48339 command_runner.go:130] >       "pinned":  true
	I1212 00:15:10.996089   48339 command_runner.go:130] >     }
	I1212 00:15:10.996095   48339 command_runner.go:130] >   ]
	I1212 00:15:10.996103   48339 command_runner.go:130] > }
	I1212 00:15:10.997943   48339 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:15:10.997972   48339 cache_images.go:86] Images are preloaded, skipping loading
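
The two `crictl images --output json` dumps above are what minikube compares against its preload list before deciding to skip extraction. As a rough illustration (not minikube's actual code), the payload can be decoded and checked like this; the struct mirrors the JSON keys visible in the log, and the two image tags are an illustrative subset of the preload list:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList mirrors the shape of `crictl images --output json` seen in the log.
    type imageList struct {
        Images []struct {
            ID       string   `json:"id"`
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        // Build a set of tags present in the runtime, then check the ones we need.
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        for _, want := range []string{ // illustrative subset of the preload list
            "registry.k8s.io/kube-scheduler:v1.35.0-beta.0",
            "registry.k8s.io/pause:3.10.1",
        } {
            fmt.Printf("%s preloaded: %v\n", want, have[want])
        }
    }
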
	I1212 00:15:10.997981   48339 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 00:15:10.998119   48339 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-767012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
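
The kubelet drop-in above is rendered from the node's config. A minimal sketch of that kind of templating with Go's text/template; the field names (BinDir, NodeName, NodeIP) are made up for illustration and the template is abridged from the unit printed above, not minikube's real one:

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        // Abridged from the kubelet drop-in shown in the log above.
        const tmpl = "[Service]\nExecStart=\nExecStart={{.BinDir}}/kubelet" +
            " --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}\n"
        t := template.Must(template.New("kubelet").Parse(tmpl))
        data := struct{ BinDir, NodeName, NodeIP string }{
            "/var/lib/minikube/binaries/v1.35.0-beta.0", "functional-767012", "192.168.49.2",
        }
        if err := t.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }
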
	I1212 00:15:10.998212   48339 ssh_runner.go:195] Run: sudo crictl info
	I1212 00:15:11.021367   48339 command_runner.go:130] > {
	I1212 00:15:11.021387   48339 command_runner.go:130] >   "cniconfig": {
	I1212 00:15:11.021393   48339 command_runner.go:130] >     "Networks": [
	I1212 00:15:11.021397   48339 command_runner.go:130] >       {
	I1212 00:15:11.021403   48339 command_runner.go:130] >         "Config": {
	I1212 00:15:11.021408   48339 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1212 00:15:11.021413   48339 command_runner.go:130] >           "Name": "cni-loopback",
	I1212 00:15:11.021418   48339 command_runner.go:130] >           "Plugins": [
	I1212 00:15:11.021422   48339 command_runner.go:130] >             {
	I1212 00:15:11.021426   48339 command_runner.go:130] >               "Network": {
	I1212 00:15:11.021430   48339 command_runner.go:130] >                 "ipam": {},
	I1212 00:15:11.021438   48339 command_runner.go:130] >                 "type": "loopback"
	I1212 00:15:11.021445   48339 command_runner.go:130] >               },
	I1212 00:15:11.021450   48339 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1212 00:15:11.021457   48339 command_runner.go:130] >             }
	I1212 00:15:11.021461   48339 command_runner.go:130] >           ],
	I1212 00:15:11.021470   48339 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1212 00:15:11.021474   48339 command_runner.go:130] >         },
	I1212 00:15:11.021485   48339 command_runner.go:130] >         "IFName": "lo"
	I1212 00:15:11.021489   48339 command_runner.go:130] >       }
	I1212 00:15:11.021493   48339 command_runner.go:130] >     ],
	I1212 00:15:11.021498   48339 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1212 00:15:11.021504   48339 command_runner.go:130] >     "PluginDirs": [
	I1212 00:15:11.021509   48339 command_runner.go:130] >       "/opt/cni/bin"
	I1212 00:15:11.021514   48339 command_runner.go:130] >     ],
	I1212 00:15:11.021525   48339 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1212 00:15:11.021533   48339 command_runner.go:130] >     "Prefix": "eth"
	I1212 00:15:11.021537   48339 command_runner.go:130] >   },
	I1212 00:15:11.021540   48339 command_runner.go:130] >   "config": {
	I1212 00:15:11.021546   48339 command_runner.go:130] >     "cdiSpecDirs": [
	I1212 00:15:11.021552   48339 command_runner.go:130] >       "/etc/cdi",
	I1212 00:15:11.021558   48339 command_runner.go:130] >       "/var/run/cdi"
	I1212 00:15:11.021560   48339 command_runner.go:130] >     ],
	I1212 00:15:11.021563   48339 command_runner.go:130] >     "cni": {
	I1212 00:15:11.021567   48339 command_runner.go:130] >       "binDir": "",
	I1212 00:15:11.021571   48339 command_runner.go:130] >       "binDirs": [
	I1212 00:15:11.021574   48339 command_runner.go:130] >         "/opt/cni/bin"
	I1212 00:15:11.021577   48339 command_runner.go:130] >       ],
	I1212 00:15:11.021582   48339 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1212 00:15:11.021585   48339 command_runner.go:130] >       "confTemplate": "",
	I1212 00:15:11.021589   48339 command_runner.go:130] >       "ipPref": "",
	I1212 00:15:11.021592   48339 command_runner.go:130] >       "maxConfNum": 1,
	I1212 00:15:11.021597   48339 command_runner.go:130] >       "setupSerially": false,
	I1212 00:15:11.021601   48339 command_runner.go:130] >       "useInternalLoopback": false
	I1212 00:15:11.021604   48339 command_runner.go:130] >     },
	I1212 00:15:11.021610   48339 command_runner.go:130] >     "containerd": {
	I1212 00:15:11.021614   48339 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1212 00:15:11.021619   48339 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1212 00:15:11.021624   48339 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1212 00:15:11.021627   48339 command_runner.go:130] >       "runtimes": {
	I1212 00:15:11.021630   48339 command_runner.go:130] >         "runc": {
	I1212 00:15:11.021635   48339 command_runner.go:130] >           "ContainerAnnotations": null,
	I1212 00:15:11.021639   48339 command_runner.go:130] >           "PodAnnotations": null,
	I1212 00:15:11.021644   48339 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1212 00:15:11.021648   48339 command_runner.go:130] >           "cgroupWritable": false,
	I1212 00:15:11.021652   48339 command_runner.go:130] >           "cniConfDir": "",
	I1212 00:15:11.021656   48339 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1212 00:15:11.021664   48339 command_runner.go:130] >           "io_type": "",
	I1212 00:15:11.021670   48339 command_runner.go:130] >           "options": {
	I1212 00:15:11.021675   48339 command_runner.go:130] >             "BinaryName": "",
	I1212 00:15:11.021683   48339 command_runner.go:130] >             "CriuImagePath": "",
	I1212 00:15:11.021695   48339 command_runner.go:130] >             "CriuWorkPath": "",
	I1212 00:15:11.021703   48339 command_runner.go:130] >             "IoGid": 0,
	I1212 00:15:11.021708   48339 command_runner.go:130] >             "IoUid": 0,
	I1212 00:15:11.021712   48339 command_runner.go:130] >             "NoNewKeyring": false,
	I1212 00:15:11.021716   48339 command_runner.go:130] >             "Root": "",
	I1212 00:15:11.021723   48339 command_runner.go:130] >             "ShimCgroup": "",
	I1212 00:15:11.021728   48339 command_runner.go:130] >             "SystemdCgroup": false
	I1212 00:15:11.021734   48339 command_runner.go:130] >           },
	I1212 00:15:11.021739   48339 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1212 00:15:11.021745   48339 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1212 00:15:11.021749   48339 command_runner.go:130] >           "runtimePath": "",
	I1212 00:15:11.021755   48339 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1212 00:15:11.021761   48339 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1212 00:15:11.021765   48339 command_runner.go:130] >           "snapshotter": ""
	I1212 00:15:11.021770   48339 command_runner.go:130] >         }
	I1212 00:15:11.021774   48339 command_runner.go:130] >       }
	I1212 00:15:11.021778   48339 command_runner.go:130] >     },
	I1212 00:15:11.021790   48339 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1212 00:15:11.021799   48339 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1212 00:15:11.021805   48339 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1212 00:15:11.021810   48339 command_runner.go:130] >     "disableApparmor": false,
	I1212 00:15:11.021816   48339 command_runner.go:130] >     "disableHugetlbController": true,
	I1212 00:15:11.021821   48339 command_runner.go:130] >     "disableProcMount": false,
	I1212 00:15:11.021825   48339 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1212 00:15:11.021828   48339 command_runner.go:130] >     "enableCDI": true,
	I1212 00:15:11.021832   48339 command_runner.go:130] >     "enableSelinux": false,
	I1212 00:15:11.021840   48339 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1212 00:15:11.021845   48339 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1212 00:15:11.021852   48339 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1212 00:15:11.021858   48339 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1212 00:15:11.021868   48339 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1212 00:15:11.021873   48339 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1212 00:15:11.021877   48339 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1212 00:15:11.021886   48339 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1212 00:15:11.021890   48339 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1212 00:15:11.021896   48339 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1212 00:15:11.021901   48339 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1212 00:15:11.021907   48339 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1212 00:15:11.021910   48339 command_runner.go:130] >   },
	I1212 00:15:11.021914   48339 command_runner.go:130] >   "features": {
	I1212 00:15:11.021919   48339 command_runner.go:130] >     "supplemental_groups_policy": true
	I1212 00:15:11.021922   48339 command_runner.go:130] >   },
	I1212 00:15:11.021926   48339 command_runner.go:130] >   "golang": "go1.24.9",
	I1212 00:15:11.021938   48339 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 00:15:11.021951   48339 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1212 00:15:11.021954   48339 command_runner.go:130] >   "runtimeHandlers": [
	I1212 00:15:11.021957   48339 command_runner.go:130] >     {
	I1212 00:15:11.021961   48339 command_runner.go:130] >       "features": {
	I1212 00:15:11.021973   48339 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 00:15:11.021977   48339 command_runner.go:130] >         "user_namespaces": true
	I1212 00:15:11.021984   48339 command_runner.go:130] >       }
	I1212 00:15:11.021991   48339 command_runner.go:130] >     },
	I1212 00:15:11.021996   48339 command_runner.go:130] >     {
	I1212 00:15:11.022000   48339 command_runner.go:130] >       "features": {
	I1212 00:15:11.022006   48339 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1212 00:15:11.022013   48339 command_runner.go:130] >         "user_namespaces": true
	I1212 00:15:11.022016   48339 command_runner.go:130] >       },
	I1212 00:15:11.022021   48339 command_runner.go:130] >       "name": "runc"
	I1212 00:15:11.022026   48339 command_runner.go:130] >     }
	I1212 00:15:11.022029   48339 command_runner.go:130] >   ],
	I1212 00:15:11.022033   48339 command_runner.go:130] >   "status": {
	I1212 00:15:11.022045   48339 command_runner.go:130] >     "conditions": [
	I1212 00:15:11.022048   48339 command_runner.go:130] >       {
	I1212 00:15:11.022055   48339 command_runner.go:130] >         "message": "",
	I1212 00:15:11.022059   48339 command_runner.go:130] >         "reason": "",
	I1212 00:15:11.022065   48339 command_runner.go:130] >         "status": true,
	I1212 00:15:11.022070   48339 command_runner.go:130] >         "type": "RuntimeReady"
	I1212 00:15:11.022073   48339 command_runner.go:130] >       },
	I1212 00:15:11.022076   48339 command_runner.go:130] >       {
	I1212 00:15:11.022083   48339 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1212 00:15:11.022087   48339 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1212 00:15:11.022094   48339 command_runner.go:130] >         "status": false,
	I1212 00:15:11.022099   48339 command_runner.go:130] >         "type": "NetworkReady"
	I1212 00:15:11.022104   48339 command_runner.go:130] >       },
	I1212 00:15:11.022107   48339 command_runner.go:130] >       {
	I1212 00:15:11.022132   48339 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1212 00:15:11.022141   48339 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1212 00:15:11.022149   48339 command_runner.go:130] >         "status": false,
	I1212 00:15:11.022155   48339 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1212 00:15:11.022158   48339 command_runner.go:130] >       }
	I1212 00:15:11.022161   48339 command_runner.go:130] >     ]
	I1212 00:15:11.022164   48339 command_runner.go:130] >   }
	I1212 00:15:11.022166   48339 command_runner.go:130] > }
	I1212 00:15:11.024522   48339 cni.go:84] Creating CNI manager for ""
	I1212 00:15:11.024547   48339 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:15:11.024564   48339 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
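
The NetworkReady=false condition in the `crictl info` output above is why a CNI (kindnet) gets recommended at this point. A small sketch of reading that condition, assuming only the status.conditions part of the payload matters; the struct fields map directly onto the JSON shown in the log:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // criInfo covers just the runtime status conditions from `crictl info`.
    type criInfo struct {
        Status struct {
            Conditions []struct {
                Type    string `json:"type"`
                Status  bool   `json:"status"`
                Reason  string `json:"reason"`
                Message string `json:"message"`
            } `json:"conditions"`
        } `json:"status"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "info").Output()
        if err != nil {
            panic(err)
        }
        var info criInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        for _, c := range info.Status.Conditions {
            if c.Type == "NetworkReady" && !c.Status {
                fmt.Printf("CNI not ready (%s): %s\n", c.Reason, c.Message)
            }
        }
    }
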
	I1212 00:15:11.024607   48339 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-767012 NodeName:functional-767012 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:15:11.024773   48339 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-767012"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
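
The generated kubeadm config above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch of walking those documents with gopkg.in/yaml.v3's stream decoder; it assumes that module is available, and reads the kubeadm.yaml.new destination that appears a few lines below:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            // Pull out just the document identity from each YAML doc in the stream.
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
        }
    }
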
	
	I1212 00:15:11.024850   48339 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:15:11.031979   48339 command_runner.go:130] > kubeadm
	I1212 00:15:11.031999   48339 command_runner.go:130] > kubectl
	I1212 00:15:11.032004   48339 command_runner.go:130] > kubelet
	I1212 00:15:11.033031   48339 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:15:11.033131   48339 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:15:11.041032   48339 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 00:15:11.054723   48339 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:15:11.067854   48339 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1212 00:15:11.081373   48339 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:15:11.085014   48339 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 00:15:11.085116   48339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:15:11.226173   48339 ssh_runner.go:195] Run: sudo systemctl start kubelet
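
The daemon-reload/start pair above, sketched with os/exec for clarity; minikube actually runs these over SSH via ssh_runner, so this is only the local equivalent of the same two commands:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(args ...string) error {
        out, err := exec.Command("sudo", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%v: %v: %s", args, err, out)
        }
        return nil
    }

    func main() {
        // Pick up the freshly written unit files, then (re)start the kubelet.
        if err := run("systemctl", "daemon-reload"); err != nil {
            panic(err)
        }
        if err := run("systemctl", "start", "kubelet"); err != nil {
            panic(err)
        }
    }
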
	I1212 00:15:12.035778   48339 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012 for IP: 192.168.49.2
	I1212 00:15:12.035798   48339 certs.go:195] generating shared ca certs ...
	I1212 00:15:12.035830   48339 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:15:12.035967   48339 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 00:15:12.036010   48339 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 00:15:12.036017   48339 certs.go:257] generating profile certs ...
	I1212 00:15:12.036117   48339 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key
	I1212 00:15:12.036165   48339 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key.fcbff5a4
	I1212 00:15:12.036201   48339 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key
	I1212 00:15:12.036209   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:15:12.036224   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:15:12.036235   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:15:12.036248   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:15:12.036258   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:15:12.036270   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:15:12.036281   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:15:12.036294   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:15:12.036341   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 00:15:12.036372   48339 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 00:15:12.036381   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:15:12.036409   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:15:12.036440   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:15:12.036468   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 00:15:12.036516   48339 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:15:12.036546   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem -> /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.036558   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.036578   48339 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.037134   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:15:12.059224   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:15:12.079145   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:15:12.096868   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:15:12.114531   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:15:12.132828   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:15:12.150161   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:15:12.168014   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:15:12.185251   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 00:15:12.202557   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 00:15:12.219625   48339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:15:12.237574   48339 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:15:12.250472   48339 ssh_runner.go:195] Run: openssl version
	I1212 00:15:12.256541   48339 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 00:15:12.256947   48339 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.264387   48339 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 00:15:12.271688   48339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.275404   48339 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.275432   48339 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.275482   48339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 00:15:12.315860   48339 command_runner.go:130] > 51391683
	I1212 00:15:12.316400   48339 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:15:12.323656   48339 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.330945   48339 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 00:15:12.339131   48339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.343064   48339 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.343159   48339 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.343241   48339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 00:15:12.383845   48339 command_runner.go:130] > 3ec20f2e
	I1212 00:15:12.384302   48339 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:15:12.391740   48339 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.398710   48339 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:15:12.406076   48339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.409726   48339 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.409770   48339 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.409826   48339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:15:12.450507   48339 command_runner.go:130] > b5213941
	I1212 00:15:12.450926   48339 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
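
The 51391683 / 3ec20f2e / b5213941 values above are OpenSSL subject hashes, which become the /etc/ssl/certs/<hash>.0 symlink names. A sketch of the same hash-then-link step, shelling out to openssl exactly as the log does (creating the symlink requires root; the cert path is the minikubeCA one from the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // emulate ln -f: replace any existing link
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", cert)
    }
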
	I1212 00:15:12.458188   48339 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:15:12.461873   48339 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:15:12.461949   48339 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 00:15:12.461961   48339 command_runner.go:130] > Device: 259,1	Inode: 1311423     Links: 1
	I1212 00:15:12.461969   48339 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:15:12.461975   48339 command_runner.go:130] > Access: 2025-12-12 00:11:05.099200071 +0000
	I1212 00:15:12.461979   48339 command_runner.go:130] > Modify: 2025-12-12 00:07:00.969098600 +0000
	I1212 00:15:12.461984   48339 command_runner.go:130] > Change: 2025-12-12 00:07:00.969098600 +0000
	I1212 00:15:12.461989   48339 command_runner.go:130] >  Birth: 2025-12-12 00:07:00.969098600 +0000
	I1212 00:15:12.462077   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:15:12.504549   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.505002   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:15:12.545847   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.545927   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:15:12.586405   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.586767   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:15:12.629151   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.629637   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:15:12.671966   48339 command_runner.go:130] > Certificate will not expire
	I1212 00:15:12.672529   48339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 00:15:12.713858   48339 command_runner.go:130] > Certificate will not expire
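
Each `-checkend 86400` call above asks whether the certificate stays valid for at least another day. The equivalent check in pure Go with crypto/x509, assuming a PEM-encoded certificate at one of the paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Equivalent of `openssl x509 -checkend 86400`.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("Certificate will expire within 24h")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }
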
	I1212 00:15:12.714272   48339 kubeadm.go:401] StartCluster: {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:15:12.714367   48339 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 00:15:12.714442   48339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:15:12.749902   48339 cri.go:89] found id: ""
	I1212 00:15:12.750000   48339 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:15:12.759407   48339 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 00:15:12.759429   48339 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 00:15:12.759437   48339 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 00:15:12.760379   48339 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:15:12.760398   48339 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:15:12.760457   48339 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:15:12.768161   48339 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:15:12.768602   48339 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-767012" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.768706   48339 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-2343/kubeconfig needs updating (will repair): [kubeconfig missing "functional-767012" cluster setting kubeconfig missing "functional-767012" context setting]
	I1212 00:15:12.769002   48339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:15:12.769434   48339 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.769575   48339 kapi.go:59] client config for functional-767012: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key", CAFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:15:12.770098   48339 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 00:15:12.770119   48339 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 00:15:12.770125   48339 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 00:15:12.770129   48339 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 00:15:12.770134   48339 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
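
The kapi.go line above builds a rest.Config pointing at the apiserver with the profile's client certificate. A stripped-down client-go equivalent using the same paths; only Host and TLSClientConfig are populated here, everything else is left at its zero value:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        profile := "/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012"
        cfg := &rest.Config{
            Host: "https://192.168.49.2:8441",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: profile + "/client.crt",
                KeyFile:  profile + "/client.key",
                CAFile:   "/home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt",
            },
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := client.CoreV1().Nodes().Get(context.Background(), "functional-767012", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("node found:", node.Name)
    }
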
	I1212 00:15:12.770402   48339 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:15:12.770508   48339 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 00:15:12.778529   48339 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 00:15:12.778562   48339 kubeadm.go:602] duration metric: took 18.158491ms to restartPrimaryControlPlane
	I1212 00:15:12.778572   48339 kubeadm.go:403] duration metric: took 64.30535ms to StartCluster
	I1212 00:15:12.778619   48339 settings.go:142] acquiring lock: {Name:mk6dd4250df69aeba4752e9f33aeef37272375c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:15:12.778710   48339 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.779343   48339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:15:12.779578   48339 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 00:15:12.779758   48339 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:15:12.779798   48339 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:15:12.779860   48339 addons.go:70] Setting storage-provisioner=true in profile "functional-767012"
	I1212 00:15:12.779873   48339 addons.go:239] Setting addon storage-provisioner=true in "functional-767012"
	I1212 00:15:12.779899   48339 host.go:66] Checking if "functional-767012" exists ...
	I1212 00:15:12.780379   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:12.780789   48339 addons.go:70] Setting default-storageclass=true in profile "functional-767012"
	I1212 00:15:12.780811   48339 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-767012"
	I1212 00:15:12.781090   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:12.784774   48339 out.go:179] * Verifying Kubernetes components...
	I1212 00:15:12.788318   48339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:15:12.822440   48339 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:15:12.822619   48339 kapi.go:59] client config for functional-767012: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key", CAFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:15:12.822882   48339 addons.go:239] Setting addon default-storageclass=true in "functional-767012"
	I1212 00:15:12.822910   48339 host.go:66] Checking if "functional-767012" exists ...
	I1212 00:15:12.823362   48339 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:15:12.828706   48339 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:15:12.831719   48339 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:12.831746   48339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:15:12.831810   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:12.856565   48339 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:12.856586   48339 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:15:12.856663   48339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:15:12.891591   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:12.907113   48339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:15:13.031282   48339 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:15:13.038860   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:13.055219   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:13.785959   48339 node_ready.go:35] waiting up to 6m0s for node "functional-767012" to be "Ready" ...
	I1212 00:15:13.786096   48339 type.go:168] "Request Body" body=""
	I1212 00:15:13.786201   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
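
The node_ready wait above repeatedly GETs the node object and inspects its conditions. A sketch of that polling loop; it takes a clientset such as the one built in the previous snippet, and waitNodeReady (and the package name) are made-up names for illustration, not minikube's:

    package nodewait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls until the node reports Ready or the timeout elapses.
    func waitNodeReady(client kubernetes.Interface, name string, timeout time.Duration) error {
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()
        for {
            node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            // Transient apiserver errors (like the refused connections below)
            // are tolerated; we simply try again on the next tick.
            select {
            case <-ctx.Done():
                return fmt.Errorf("node %q not Ready after %v", name, timeout)
            case <-time.After(2 * time.Second):
            }
        }
    }
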
	I1212 00:15:13.786332   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:13.786513   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:13.786544   48339 retry.go:31] will retry after 252.334378ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:13.786634   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:13.786678   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:13.786692   48339 retry.go:31] will retry after 187.958053ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
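Both addon applies fail the same way: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and with nothing listening on port 8441 that download is refused, so the "error validating" message is a symptom of apiserver unreachability rather than of either manifest (the suggested --validate=false would only skip the schema fetch, not fix the connection). A hedged probe of the exact endpoint the error names, assuming only that TLS verification can be skipped for a local check:

// Hedged sketch: probe the /openapi/v2 endpoint that kubectl's
// client-side validation fetches. URL taken from the error text above;
// InsecureSkipVerify is an assumption for a local probe only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	c := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := c.Get("https://localhost:8441/openapi/v2?timeout=32s")
	if err != nil {
		// Expect "connection refused" while the apiserver is down.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi endpoint status:", resp.Status)
}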
	I1212 00:15:13.786725   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:13.975259   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:14.039772   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:14.044477   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.044582   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.044648   48339 retry.go:31] will retry after 322.190642ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.103040   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.103100   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.103119   48339 retry.go:31] will retry after 449.616448ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.286283   48339 type.go:168] "Request Body" body=""
	I1212 00:15:14.286357   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:14.286666   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
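Each poll shows up in the log as a round_trippers Request/Response pair, and a Response with status="" and milliseconds=0 means the transport failed before any HTTP response arrived: the TCP connect was refused, so there is no status to record. A minimal sketch of such a logging http.RoundTripper, with a simplified log format; this illustrates the pattern, not the actual round_trippers implementation.

// Hedged sketch of a logging RoundTripper: logs verb/URL before the
// request and the status after; on a transport error there is no
// response, which is why the log above shows status="" and 0 ms.
package main

import (
	"fmt"
	"net/http"
	"time"
)

type loggingRT struct{ next http.RoundTripper }

func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
	fmt.Printf("Request verb=%q url=%q\n", req.Method, req.URL)
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	ms := time.Since(start).Milliseconds()
	if err != nil {
		fmt.Printf("Response status=%q milliseconds=%d\n", "", ms)
		return nil, err
	}
	fmt.Printf("Response status=%q milliseconds=%d\n", resp.Status, ms)
	return resp, nil
}

func main() {
	c := &http.Client{Transport: loggingRT{http.DefaultTransport}}
	_, err := c.Get("https://192.168.49.2:8441/api/v1/nodes/functional-767012")
	fmt.Println("err:", err)
}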
	I1212 00:15:14.367911   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:14.423058   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.426726   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.426805   48339 retry.go:31] will retry after 304.882295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.552989   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:14.624219   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.624296   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.624324   48339 retry.go:31] will retry after 431.233251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.732500   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:14.787073   48339 type.go:168] "Request Body" body=""
	I1212 00:15:14.787160   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:14.787408   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:14.793570   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:14.793617   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:14.793638   48339 retry.go:31] will retry after 814.242182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.055819   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:15.115988   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:15.119844   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.119920   48339 retry.go:31] will retry after 1.173578041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.287015   48339 type.go:168] "Request Body" body=""
	I1212 00:15:15.287127   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:15.287435   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:15.608995   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:15.668352   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:15.672074   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.672106   48339 retry.go:31] will retry after 987.735436ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:15.786224   48339 type.go:168] "Request Body" body=""
	I1212 00:15:15.786336   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:15.786676   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:15.786781   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
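The node_ready wait behind these warnings is a simple poll: roughly every 500ms it GETs /api/v1/nodes/functional-767012 and checks the node's Ready condition, logging the warning above and retrying on error, within the 6m0s budget declared at node_ready.go:35. A minimal client-go sketch of that loop, assuming the kubeconfig path and node name from this log; the 500ms cadence is read off the timestamps above, not taken from minikube's source.

// Hedged sketch: poll a node's Ready condition the way the
// node_ready.go lines above do. Kubeconfig path, node name, and the
// 6-minute budget come from this log; everything else is illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // the 6m0s wait declared above
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-767012", metav1.GetOptions{})
		if err != nil {
			// e.g. "connection refused", as in every attempt in this log
			fmt.Println("will retry:", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log polls about twice a second
	}
	fmt.Println("timed out waiting for Ready")
}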
	I1212 00:15:16.286218   48339 type.go:168] "Request Body" body=""
	I1212 00:15:16.286309   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:16.286618   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:16.293963   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:16.350242   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:16.354044   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.354074   48339 retry.go:31] will retry after 1.703488512s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.660633   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:16.720806   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:16.720847   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.720866   48339 retry.go:31] will retry after 1.717481089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:16.787045   48339 type.go:168] "Request Body" body=""
	I1212 00:15:16.787165   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:16.787500   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:17.287197   48339 type.go:168] "Request Body" body=""
	I1212 00:15:17.287287   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:17.287663   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:17.786193   48339 type.go:168] "Request Body" body=""
	I1212 00:15:17.786301   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:17.786622   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:18.058032   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:18.119712   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:18.119758   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.119777   48339 retry.go:31] will retry after 2.564790813s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.286189   48339 type.go:168] "Request Body" body=""
	I1212 00:15:18.286256   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:18.286531   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:18.286571   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:18.438948   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:18.492343   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:18.495818   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.495853   48339 retry.go:31] will retry after 3.474173077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:18.786235   48339 type.go:168] "Request Body" body=""
	I1212 00:15:18.786319   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:18.786633   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:19.286373   48339 type.go:168] "Request Body" body=""
	I1212 00:15:19.286489   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:19.286915   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:19.786192   48339 type.go:168] "Request Body" body=""
	I1212 00:15:19.786262   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:19.786531   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:20.286266   48339 type.go:168] "Request Body" body=""
	I1212 00:15:20.286338   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:20.286671   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:20.286730   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:20.685395   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:20.744336   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:20.744377   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:20.744397   48339 retry.go:31] will retry after 3.068053389s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:20.786556   48339 type.go:168] "Request Body" body=""
	I1212 00:15:20.786632   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:20.787017   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:21.286794   48339 type.go:168] "Request Body" body=""
	I1212 00:15:21.286863   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:21.287178   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:21.786938   48339 type.go:168] "Request Body" body=""
	I1212 00:15:21.787095   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:21.787425   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:21.970778   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:22.029300   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:22.033382   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:22.033416   48339 retry.go:31] will retry after 3.143683139s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:22.286887   48339 type.go:168] "Request Body" body=""
	I1212 00:15:22.286963   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:22.287298   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:22.287349   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:22.786122   48339 type.go:168] "Request Body" body=""
	I1212 00:15:22.786203   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:22.786515   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:23.286522   48339 type.go:168] "Request Body" body=""
	I1212 00:15:23.286595   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:23.286902   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:23.786669   48339 type.go:168] "Request Body" body=""
	I1212 00:15:23.786750   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:23.787071   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:23.813245   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:23.872447   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:23.872484   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:23.872503   48339 retry.go:31] will retry after 4.295118946s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:24.286878   48339 type.go:168] "Request Body" body=""
	I1212 00:15:24.286966   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:24.287236   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:24.787020   48339 type.go:168] "Request Body" body=""
	I1212 00:15:24.787113   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:24.787396   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:24.787455   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:25.178129   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:25.240141   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:25.243777   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:25.243806   48339 retry.go:31] will retry after 9.168145583s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:25.286119   48339 type.go:168] "Request Body" body=""
	I1212 00:15:25.286212   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:25.286559   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:25.787134   48339 type.go:168] "Request Body" body=""
	I1212 00:15:25.787314   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:25.787683   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:26.286268   48339 type.go:168] "Request Body" body=""
	I1212 00:15:26.286357   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:26.286692   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:26.786194   48339 type.go:168] "Request Body" body=""
	I1212 00:15:26.786286   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:26.786601   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:27.286932   48339 type.go:168] "Request Body" body=""
	I1212 00:15:27.287015   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:27.287267   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:27.287315   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:27.787077   48339 type.go:168] "Request Body" body=""
	I1212 00:15:27.787176   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:27.787513   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:28.168008   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:28.231881   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:28.231917   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:28.231944   48339 retry.go:31] will retry after 6.344313185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:28.286314   48339 type.go:168] "Request Body" body=""
	I1212 00:15:28.286400   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:28.286700   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:28.786192   48339 type.go:168] "Request Body" body=""
	I1212 00:15:28.786267   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:28.786531   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:29.286238   48339 type.go:168] "Request Body" body=""
	I1212 00:15:29.286308   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:29.286623   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:29.786295   48339 type.go:168] "Request Body" body=""
	I1212 00:15:29.786368   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:29.786689   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:29.786753   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:30.287110   48339 type.go:168] "Request Body" body=""
	I1212 00:15:30.287175   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:30.287426   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:30.786872   48339 type.go:168] "Request Body" body=""
	I1212 00:15:30.786960   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:30.787297   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:31.286942   48339 type.go:168] "Request Body" body=""
	I1212 00:15:31.287032   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:31.287368   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:31.786980   48339 type.go:168] "Request Body" body=""
	I1212 00:15:31.787074   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:31.787418   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:31.787478   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:32.286186   48339 type.go:168] "Request Body" body=""
	I1212 00:15:32.286271   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:32.286599   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:32.786430   48339 type.go:168] "Request Body" body=""
	I1212 00:15:32.786534   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:32.786856   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:33.286674   48339 type.go:168] "Request Body" body=""
	I1212 00:15:33.286767   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:33.287049   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:33.786796   48339 type.go:168] "Request Body" body=""
	I1212 00:15:33.786868   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:33.787225   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:34.286903   48339 type.go:168] "Request Body" body=""
	I1212 00:15:34.287005   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:34.287348   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:34.287421   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:34.412873   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:34.471886   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:34.475429   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:34.475459   48339 retry.go:31] will retry after 5.427832253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:34.576727   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:34.645023   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:34.645064   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:34.645084   48339 retry.go:31] will retry after 14.315988892s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
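By this point the per-manifest retry delays have grown from a few hundred milliseconds to 14.3s: the "will retry after ..." intervals follow a roughly exponential, jittered backoff. A hedged sketch of that shape, assuming a 1.5x growth factor and uniform jitter; this approximates the logged intervals but is not minikube's actual retry.go implementation.

// Hedged sketch of jittered exponential backoff matching the shape of
// the "will retry after ..." intervals above. Base delay, growth
// factor, and jitter range are assumptions fitted by eye to this log.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		// Uniform jitter in [delay/2, 1.5*delay), then grow the base.
		jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d: will retry after %v\n", attempt, jittered)
		delay = time.Duration(float64(delay) * 1.5) // trends toward the multi-second waits seen above
	}
}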
	I1212 00:15:34.786162   48339 type.go:168] "Request Body" body=""
	I1212 00:15:34.786245   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:34.786506   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:35.286256   48339 type.go:168] "Request Body" body=""
	I1212 00:15:35.286369   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:35.286766   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:36.786704   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical ~500ms GET polls of /api/v1/nodes/functional-767012 continue through 00:15:39.787, with the same node_ready.go:55 warning again at 00:15:38; repeated request/response lines omitted ...]
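Note: the omitted lines are minikube's node-readiness wait: one GET against the node object every ~500ms until the apiserver answers, with a warning logged every few attempts. A minimal Go sketch of that polling pattern follows; waitNodeReady and the bare net/http client are illustrative assumptions, not minikube's actual node_ready.go code (which uses client-go and inspects the node's "Ready" condition):

	// Minimal sketch of the readiness poll seen above (assumed names, not
	// minikube's real implementation). It issues a GET every 500ms and treats
	// transport errors such as "connection refused" as "not ready yet".
	package main

	import (
		"context"
		"fmt"
		"net/http"
		"time"
	)

	func waitNodeReady(ctx context.Context, client *http.Client, nodeURL string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return ctx.Err() // overall deadline reached; give up
			case <-ticker.C:
				req, err := http.NewRequestWithContext(ctx, http.MethodGet, nodeURL, nil)
				if err != nil {
					return err
				}
				// Same content negotiation the logged requests show.
				req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
				resp, err := client.Do(req)
				if err != nil {
					// apiserver still down (e.g. connection refused): keep polling.
					fmt.Printf("node not reachable (will retry): %v\n", err)
					continue
				}
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // node object retrievable; caller can now check conditions
				}
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		err := waitNodeReady(ctx, http.DefaultClient, "https://192.168.49.2:8441/api/v1/nodes/functional-767012")
		fmt.Println("wait finished:", err)
	}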
	I1212 00:15:39.903977   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:39.961517   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:39.961553   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:39.961584   48339 retry.go:31] will retry after 9.825060256s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
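Note: this apply/back-off cycle repeats below with growing delays (9.8s, then 15.0s, 17.3s, 17.4s, 24.9s, 24.1s). A Go sketch of that retry shape follows; retryCommand is a hypothetical helper for illustration, not minikube's retry.go:

	// Re-run a command until it succeeds or attempts run out, sleeping a
	// growing, jittered delay between tries - the behaviour behind the
	// "will retry after 9.825060256s" lines in this log. Assumed helper,
	// not minikube's actual retry implementation.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func retryCommand(name string, args []string, attempts int, base time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = exec.Command(name, args...).Run(); err == nil {
				return nil
			}
			// Exponential backoff plus jitter spreads retries out, much like
			// the increasing delays recorded in this log.
			delay := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
	}

	func main() {
		// Same shape as the addon apply above; kubectl must be on PATH for
		// this example to do anything useful.
		err := retryCommand("kubectl",
			[]string{"apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml"},
			5, 5*time.Second)
		fmt.Println(err)
	}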
	I1212 00:15:40.286904   48339 type.go:168] "Request Body" body=""
	I1212 00:15:40.287016   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:40.287324   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical ~500ms polls continue through 00:15:48.786, with node_ready.go:55 "connection refused" warnings at 00:15:41, 00:15:43 and 00:15:46 ...]
	W1212 00:15:48.786725   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:48.962079   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:15:49.024775   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:49.024824   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:49.024842   48339 retry.go:31] will retry after 15.053349185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
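Note: each of these applies fails before anything reaches the cluster: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, so while the apiserver is down even a well-formed manifest cannot be validated (the error text itself points to --validate=false as the escape hatch). A small Go probe makes that dependency visible; checkAPIServer is a hypothetical helper, and skipping TLS verification is acceptable only for an illustrative probe like this:

	// Probe the same endpoint kubectl's validator fetches. While this GET
	// returns "connection refused", every "kubectl apply" in this log will
	// fail validation the same way. Assumed helper for illustration.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func checkAPIServer(base string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The test cluster's certificate is self-signed; skip verification
			// for this diagnostic probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(base + "/openapi/v2?timeout=32s")
		if err != nil {
			return fmt.Errorf("apiserver unreachable, apply validation will fail: %w", err)
		}
		defer resp.Body.Close()
		fmt.Println("openapi endpoint status:", resp.Status)
		return nil
	}

	func main() {
		if err := checkAPIServer("https://localhost:8441"); err != nil {
			fmt.Println(err)
		}
	}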
	I1212 00:15:49.286133   48339 type.go:168] "Request Body" body=""
	I1212 00:15:49.286218   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:49.286771   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:49.786188   48339 type.go:168] "Request Body" body=""
	I1212 00:15:49.786266   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:49.786639   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:49.786790   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:15:49.873069   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:15:49.873108   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:49.873126   48339 retry.go:31] will retry after 17.371130847s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:15:50.286878   48339 type.go:168] "Request Body" body=""
	I1212 00:15:50.286961   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:50.287310   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:15:50.787122   48339 type.go:168] "Request Body" body=""
	I1212 00:15:50.787202   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:50.787523   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:15:50.787579   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:15:51.286912   48339 type.go:168] "Request Body" body=""
	I1212 00:15:51.286981   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:15:51.287298   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical ~500ms polls continue through 00:16:03.787, with node_ready.go:55 "connection refused" warnings at 00:15:53, 00:15:55, 00:15:57, 00:16:00 and 00:16:02 ...]
	I1212 00:16:04.078782   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:16:04.137731   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:04.141181   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:04.141215   48339 retry.go:31] will retry after 17.411337884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:04.286486   48339 type.go:168] "Request Body" body=""
	I1212 00:16:04.286564   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:04.286889   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical ~500ms polls continue through 00:16:06.786, with a node_ready.go:55 "connection refused" warning at 00:16:05 ...]
	I1212 00:16:07.245320   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:16:07.286783   48339 type.go:168] "Request Body" body=""
	I1212 00:16:07.286895   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:07.287194   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:07.287250   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:07.304749   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:07.304789   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:07.304807   48339 retry.go:31] will retry after 24.953429831s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:07.787063   48339 type.go:168] "Request Body" body=""
	I1212 00:16:07.787138   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:07.787437   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical ~500ms polls continue through 00:16:11.786, with a node_ready.go:55 "connection refused" warning at 00:16:09 ...]
	W1212 00:16:11.786679   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:12.286942   48339 type.go:168] "Request Body" body=""
	I1212 00:16:12.287031   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:12.287292   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical ~500ms polls continue through 00:16:21.286, with node_ready.go:55 "connection refused" warnings at 00:16:13, 00:16:16, 00:16:18 and 00:16:20 ...]
	I1212 00:16:21.552920   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:16:21.609312   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:21.612881   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:21.612910   48339 retry.go:31] will retry after 24.114548677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 00:16:21.786128   48339 type.go:168] "Request Body" body=""
	I1212 00:16:21.786221   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:21.786547   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical ~500ms polls continue through 00:16:26.786, with node_ready.go:55 "connection refused" warnings at 00:16:22 and 00:16:25 ...]
	I1212 00:16:27.286245   48339 type.go:168] "Request Body" body=""
	I1212 00:16:27.286316   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:27.286597   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:27.786283   48339 type.go:168] "Request Body" body=""
	I1212 00:16:27.786355   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:27.786690   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:27.786745   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:28.286519   48339 type.go:168] "Request Body" body=""
	I1212 00:16:28.286594   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:28.286931   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:28.786692   48339 type.go:168] "Request Body" body=""
	I1212 00:16:28.786765   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:28.787040   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:29.286807   48339 type.go:168] "Request Body" body=""
	I1212 00:16:29.286879   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:29.287246   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:29.786890   48339 type.go:168] "Request Body" body=""
	I1212 00:16:29.786966   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:29.787276   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:29.787321   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:30.287063   48339 type.go:168] "Request Body" body=""
	I1212 00:16:30.287137   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:30.287393   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:30.787118   48339 type.go:168] "Request Body" body=""
	I1212 00:16:30.787201   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:30.787551   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:31.286150   48339 type.go:168] "Request Body" body=""
	I1212 00:16:31.286271   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:31.286606   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:31.786159   48339 type.go:168] "Request Body" body=""
	I1212 00:16:31.786233   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:31.786502   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:32.259311   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:16:32.286776   48339 type.go:168] "Request Body" body=""
	I1212 00:16:32.286852   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:32.287141   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:32.287191   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:32.315690   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:32.319144   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:32.319251   48339 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
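
Both the storageclass and storage-provisioner failures share one root cause: kubectl's client-side validation fetches the apiserver's /openapi/v2 document first, and nothing is listening on localhost:8441, so even `apply --force` dies before sending the manifest. A quick way to confirm that diagnosis is to probe the apiserver's health endpoint directly; a hedged Go sketch (the port comes from this log, and InsecureSkipVerify is an assumption acceptable only for a throwaway local probe of minikube's self-signed cert):

// Quick probe of the apiserver health endpoint that kubectl's validation
// depends on. Illustrative only; do not skip TLS verification in real tooling.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8441/readyz")
	if err != nil {
		// With the apiserver down this mirrors the log:
		// dial tcp [::1]:8441: connect: connection refused
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver status:", resp.Status)
}
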
	I1212 00:16:32.786146   48339 type.go:168] "Request Body" body=""
	I1212 00:16:32.786230   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:32.786571   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:33.286348   48339 type.go:168] "Request Body" body=""
	I1212 00:16:33.286423   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:33.286668   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:33.786187   48339 type.go:168] "Request Body" body=""
	I1212 00:16:33.786262   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:33.786597   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:34.286351   48339 type.go:168] "Request Body" body=""
	I1212 00:16:34.286425   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:34.286777   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:34.787084   48339 type.go:168] "Request Body" body=""
	I1212 00:16:34.787156   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:34.787405   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:34.787444   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:35.286102   48339 type.go:168] "Request Body" body=""
	I1212 00:16:35.286177   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:35.286533   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:35.786215   48339 type.go:168] "Request Body" body=""
	I1212 00:16:35.786285   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:35.786632   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:36.287087   48339 type.go:168] "Request Body" body=""
	I1212 00:16:36.287160   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:36.287418   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:36.786103   48339 type.go:168] "Request Body" body=""
	I1212 00:16:36.786193   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:36.786526   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:37.286129   48339 type.go:168] "Request Body" body=""
	I1212 00:16:37.286202   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:37.286544   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:37.286600   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:37.787026   48339 type.go:168] "Request Body" body=""
	I1212 00:16:37.787100   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:37.787357   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:38.286531   48339 type.go:168] "Request Body" body=""
	I1212 00:16:38.286611   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:38.286935   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:38.786684   48339 type.go:168] "Request Body" body=""
	I1212 00:16:38.786754   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:38.787096   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:39.286816   48339 type.go:168] "Request Body" body=""
	I1212 00:16:39.286887   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:39.287147   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:39.287187   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:39.786891   48339 type.go:168] "Request Body" body=""
	I1212 00:16:39.786969   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:39.787334   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:40.287018   48339 type.go:168] "Request Body" body=""
	I1212 00:16:40.287113   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:40.287426   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:40.786868   48339 type.go:168] "Request Body" body=""
	I1212 00:16:40.786934   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:40.787251   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:41.287087   48339 type.go:168] "Request Body" body=""
	I1212 00:16:41.287180   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:41.287508   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:41.287561   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:41.786226   48339 type.go:168] "Request Body" body=""
	I1212 00:16:41.786304   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:41.786661   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:42.286381   48339 type.go:168] "Request Body" body=""
	I1212 00:16:42.286463   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:42.286744   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:42.786456   48339 type.go:168] "Request Body" body=""
	I1212 00:16:42.786532   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:42.786873   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:43.286753   48339 type.go:168] "Request Body" body=""
	I1212 00:16:43.286834   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:43.287195   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:43.786972   48339 type.go:168] "Request Body" body=""
	I1212 00:16:43.787061   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:43.787340   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:43.787388   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:44.287150   48339 type.go:168] "Request Body" body=""
	I1212 00:16:44.287228   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:44.287570   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:44.786168   48339 type.go:168] "Request Body" body=""
	I1212 00:16:44.786239   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:44.786580   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:45.286154   48339 type.go:168] "Request Body" body=""
	I1212 00:16:45.286221   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:45.286507   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:45.728277   48339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:16:45.786458   48339 type.go:168] "Request Body" body=""
	I1212 00:16:45.786536   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:45.786800   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:45.788347   48339 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:45.788381   48339 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 00:16:45.788458   48339 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 00:16:45.791789   48339 out.go:179] * Enabled addons: 
	I1212 00:16:45.795459   48339 addons.go:530] duration metric: took 1m33.015656607s for enable addons: enabled=[]
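
The addon phase thus ends after 1m33s with `enabled=[]`: every callback exhausted its retries against the unreachable apiserver, so nothing was enabled. The "duration metric" line is simply elapsed wall-clock time around the phase; an illustrative equivalent (the sleep is a stand-in, not minikube's code):

// Illustrative reconstruction of the "duration metric" log line above;
// the enabled slice is empty because no addon callback succeeded.
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	enabled := []string{}               // all addon applies failed
	time.Sleep(10 * time.Millisecond)   // stand-in for the enable-addons phase
	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
		time.Since(start), enabled)
}
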
	I1212 00:16:46.287010   48339 type.go:168] "Request Body" body=""
	I1212 00:16:46.287081   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:46.287404   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:46.287462   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:46.786103   48339 type.go:168] "Request Body" body=""
	I1212 00:16:46.786175   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:46.786467   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:47.286190   48339 type.go:168] "Request Body" body=""
	I1212 00:16:47.286259   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:47.286575   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:47.786211   48339 type.go:168] "Request Body" body=""
	I1212 00:16:47.786307   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:47.786638   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:48.286474   48339 type.go:168] "Request Body" body=""
	I1212 00:16:48.286546   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:48.286806   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:48.786468   48339 type.go:168] "Request Body" body=""
	I1212 00:16:48.786549   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:48.786891   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:48.786943   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:49.286477   48339 type.go:168] "Request Body" body=""
	I1212 00:16:49.286551   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:49.286848   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:49.786149   48339 type.go:168] "Request Body" body=""
	I1212 00:16:49.786225   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:49.786558   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:50.286220   48339 type.go:168] "Request Body" body=""
	I1212 00:16:50.286298   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:50.286632   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:50.786373   48339 type.go:168] "Request Body" body=""
	I1212 00:16:50.786482   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:50.786811   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:51.287106   48339 type.go:168] "Request Body" body=""
	I1212 00:16:51.287186   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:51.287452   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:51.287504   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:51.786168   48339 type.go:168] "Request Body" body=""
	I1212 00:16:51.786246   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:51.786652   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:52.286230   48339 type.go:168] "Request Body" body=""
	I1212 00:16:52.286316   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:52.286605   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:52.786447   48339 type.go:168] "Request Body" body=""
	I1212 00:16:52.786524   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:52.786794   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:53.286795   48339 type.go:168] "Request Body" body=""
	I1212 00:16:53.286881   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:53.287250   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:53.786893   48339 type.go:168] "Request Body" body=""
	I1212 00:16:53.786965   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:53.787310   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:53.787368   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:54.287009   48339 type.go:168] "Request Body" body=""
	I1212 00:16:54.287074   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:54.287399   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:54.787136   48339 type.go:168] "Request Body" body=""
	I1212 00:16:54.787210   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:54.787556   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:55.286167   48339 type.go:168] "Request Body" body=""
	I1212 00:16:55.286259   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:55.286627   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:55.787083   48339 type.go:168] "Request Body" body=""
	I1212 00:16:55.787159   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:55.787400   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:55.787438   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:56.286110   48339 type.go:168] "Request Body" body=""
	I1212 00:16:56.286192   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:56.286559   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:56.786120   48339 type.go:168] "Request Body" body=""
	I1212 00:16:56.786194   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:56.786507   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:57.286159   48339 type.go:168] "Request Body" body=""
	I1212 00:16:57.286235   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:57.286541   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:57.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:16:57.786273   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:57.786608   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:58.286384   48339 type.go:168] "Request Body" body=""
	I1212 00:16:58.286456   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:58.286786   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:16:58.286842   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:16:58.786113   48339 type.go:168] "Request Body" body=""
	I1212 00:16:58.786195   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:58.786436   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:59.286107   48339 type.go:168] "Request Body" body=""
	I1212 00:16:59.286184   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:59.286539   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:16:59.786120   48339 type.go:168] "Request Body" body=""
	I1212 00:16:59.786208   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:16:59.786557   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:00.301305   48339 type.go:168] "Request Body" body=""
	I1212 00:17:00.301394   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:00.301705   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:00.301755   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:00.786915   48339 type.go:168] "Request Body" body=""
	I1212 00:17:00.787023   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:00.787365   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:01.286116   48339 type.go:168] "Request Body" body=""
	I1212 00:17:01.286201   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:01.286498   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:01.786331   48339 type.go:168] "Request Body" body=""
	I1212 00:17:01.786455   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:01.787063   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:02.286594   48339 type.go:168] "Request Body" body=""
	I1212 00:17:02.286683   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:02.287073   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:02.786476   48339 type.go:168] "Request Body" body=""
	I1212 00:17:02.786554   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:02.786843   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:02.786901   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:03.286851   48339 type.go:168] "Request Body" body=""
	I1212 00:17:03.286949   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:03.287380   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:03.787098   48339 type.go:168] "Request Body" body=""
	I1212 00:17:03.787174   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:03.787557   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:04.286231   48339 type.go:168] "Request Body" body=""
	I1212 00:17:04.286326   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:04.286645   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:04.786397   48339 type.go:168] "Request Body" body=""
	I1212 00:17:04.786491   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:04.786849   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:05.286553   48339 type.go:168] "Request Body" body=""
	I1212 00:17:05.286637   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:05.286984   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:05.287068   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:05.786188   48339 type.go:168] "Request Body" body=""
	I1212 00:17:05.786271   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:05.786551   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:06.286285   48339 type.go:168] "Request Body" body=""
	I1212 00:17:06.286367   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:06.286754   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:06.786511   48339 type.go:168] "Request Body" body=""
	I1212 00:17:06.786601   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:06.786964   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:07.286708   48339 type.go:168] "Request Body" body=""
	I1212 00:17:07.286779   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:07.287068   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:07.287118   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:07.786821   48339 type.go:168] "Request Body" body=""
	I1212 00:17:07.786901   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:07.787214   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:08.286844   48339 type.go:168] "Request Body" body=""
	I1212 00:17:08.286917   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:08.288380   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1212 00:17:08.786934   48339 type.go:168] "Request Body" body=""
	I1212 00:17:08.787026   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:08.787269   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:09.287048   48339 type.go:168] "Request Body" body=""
	I1212 00:17:09.287121   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:09.287442   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:09.287495   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:09.786147   48339 type.go:168] "Request Body" body=""
	I1212 00:17:09.786230   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:09.786586   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:10.286883   48339 type.go:168] "Request Body" body=""
	I1212 00:17:10.286956   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:10.287243   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:10.786958   48339 type.go:168] "Request Body" body=""
	I1212 00:17:10.787045   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:10.787380   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:11.287044   48339 type.go:168] "Request Body" body=""
	I1212 00:17:11.287119   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:11.287444   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:11.786125   48339 type.go:168] "Request Body" body=""
	I1212 00:17:11.786193   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:11.786444   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:11.786489   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:17:12.286149   48339 type.go:168] "Request Body" body=""
	I1212 00:17:12.286229   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:12.286580   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:12.786352   48339 type.go:168] "Request Body" body=""
	I1212 00:17:12.786428   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:12.786688   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:13.286596   48339 type.go:168] "Request Body" body=""
	I1212 00:17:13.286663   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:13.286919   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:17:13.786173   48339 type.go:168] "Request Body" body=""
	I1212 00:17:13.786241   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:17:13.786564   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:17:13.786616   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[log truncated for readability: the identical GET poll against https://192.168.49.2:8441/api/v1/nodes/functional-767012 repeated every ~500 ms from 00:17:14.286 through 00:18:14.287 with the same Accept and User-Agent headers; every attempt returned no response (status="" headers="" milliseconds=0), and node_ready.go:55 kept logging the same "connection refused (will retry)" warning roughly every 2 s]
	I1212 00:18:14.786139   48339 type.go:168] "Request Body" body=""
	I1212 00:18:14.786211   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:14.786471   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:15.286165   48339 type.go:168] "Request Body" body=""
	I1212 00:18:15.286243   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:15.286559   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:15.286619   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:15.786275   48339 type.go:168] "Request Body" body=""
	I1212 00:18:15.786355   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:15.786707   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:16.286358   48339 type.go:168] "Request Body" body=""
	I1212 00:18:16.286435   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:16.286754   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:16.786213   48339 type.go:168] "Request Body" body=""
	I1212 00:18:16.786285   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:16.786626   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:17.286235   48339 type.go:168] "Request Body" body=""
	I1212 00:18:17.286316   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:17.286711   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:17.286765   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:17.786210   48339 type.go:168] "Request Body" body=""
	I1212 00:18:17.786299   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:17.786594   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:18.286667   48339 type.go:168] "Request Body" body=""
	I1212 00:18:18.286745   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:18.287093   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:18.786871   48339 type.go:168] "Request Body" body=""
	I1212 00:18:18.786957   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:18.787347   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:19.287118   48339 type.go:168] "Request Body" body=""
	I1212 00:18:19.287189   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:19.287538   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:19.287598   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:19.786173   48339 type.go:168] "Request Body" body=""
	I1212 00:18:19.786250   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:19.786591   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:20.286290   48339 type.go:168] "Request Body" body=""
	I1212 00:18:20.286368   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:20.286732   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:20.786421   48339 type.go:168] "Request Body" body=""
	I1212 00:18:20.786496   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:20.786769   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:21.286229   48339 type.go:168] "Request Body" body=""
	I1212 00:18:21.286299   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:21.286631   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:21.786238   48339 type.go:168] "Request Body" body=""
	I1212 00:18:21.786325   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:21.786704   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:21.786756   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:22.286201   48339 type.go:168] "Request Body" body=""
	I1212 00:18:22.286267   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:22.286513   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:22.786439   48339 type.go:168] "Request Body" body=""
	I1212 00:18:22.786511   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:22.786820   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:23.286747   48339 type.go:168] "Request Body" body=""
	I1212 00:18:23.286828   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:23.287136   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:23.786886   48339 type.go:168] "Request Body" body=""
	I1212 00:18:23.786958   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:23.787219   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:23.787272   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:24.287069   48339 type.go:168] "Request Body" body=""
	I1212 00:18:24.287145   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:24.287464   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:24.787101   48339 type.go:168] "Request Body" body=""
	I1212 00:18:24.787205   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:24.787503   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:25.286157   48339 type.go:168] "Request Body" body=""
	I1212 00:18:25.286231   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:25.286484   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:25.786179   48339 type.go:168] "Request Body" body=""
	I1212 00:18:25.786250   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:25.786581   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:26.286254   48339 type.go:168] "Request Body" body=""
	I1212 00:18:26.286329   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:26.286638   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:26.286693   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:26.786132   48339 type.go:168] "Request Body" body=""
	I1212 00:18:26.786199   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:26.786452   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:27.286167   48339 type.go:168] "Request Body" body=""
	I1212 00:18:27.286240   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:27.286520   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:27.786148   48339 type.go:168] "Request Body" body=""
	I1212 00:18:27.786250   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:27.786565   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:28.286477   48339 type.go:168] "Request Body" body=""
	I1212 00:18:28.286544   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:28.286801   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:28.286842   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:28.786195   48339 type.go:168] "Request Body" body=""
	I1212 00:18:28.786269   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:28.786563   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:29.286151   48339 type.go:168] "Request Body" body=""
	I1212 00:18:29.286228   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:29.286541   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:29.786783   48339 type.go:168] "Request Body" body=""
	I1212 00:18:29.786859   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:29.787122   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:30.286875   48339 type.go:168] "Request Body" body=""
	I1212 00:18:30.286953   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:30.287291   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:30.287342   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:30.786921   48339 type.go:168] "Request Body" body=""
	I1212 00:18:30.787054   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:30.787386   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:31.287040   48339 type.go:168] "Request Body" body=""
	I1212 00:18:31.287113   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:31.287420   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:31.786111   48339 type.go:168] "Request Body" body=""
	I1212 00:18:31.786190   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:31.786534   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:32.286242   48339 type.go:168] "Request Body" body=""
	I1212 00:18:32.286317   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:32.286644   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:32.787099   48339 type.go:168] "Request Body" body=""
	I1212 00:18:32.787169   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:32.787444   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:32.787485   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:33.286455   48339 type.go:168] "Request Body" body=""
	I1212 00:18:33.286531   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:33.286867   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:33.786185   48339 type.go:168] "Request Body" body=""
	I1212 00:18:33.786263   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:33.786599   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:34.287030   48339 type.go:168] "Request Body" body=""
	I1212 00:18:34.287101   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:34.287356   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:34.787107   48339 type.go:168] "Request Body" body=""
	I1212 00:18:34.787178   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:34.787462   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:34.787506   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:35.287151   48339 type.go:168] "Request Body" body=""
	I1212 00:18:35.287227   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:35.287561   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:35.786156   48339 type.go:168] "Request Body" body=""
	I1212 00:18:35.786227   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:35.786476   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:36.286225   48339 type.go:168] "Request Body" body=""
	I1212 00:18:36.286302   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:36.286658   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:36.786364   48339 type.go:168] "Request Body" body=""
	I1212 00:18:36.786441   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:36.786776   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:37.287081   48339 type.go:168] "Request Body" body=""
	I1212 00:18:37.287160   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:37.287429   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:37.287479   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:37.786116   48339 type.go:168] "Request Body" body=""
	I1212 00:18:37.786189   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:37.786493   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:38.286438   48339 type.go:168] "Request Body" body=""
	I1212 00:18:38.286517   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:38.286835   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:38.786180   48339 type.go:168] "Request Body" body=""
	I1212 00:18:38.786274   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:38.786571   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:39.286206   48339 type.go:168] "Request Body" body=""
	I1212 00:18:39.286282   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:39.286612   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:39.786192   48339 type.go:168] "Request Body" body=""
	I1212 00:18:39.786279   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:39.786630   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:39.786682   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:40.286354   48339 type.go:168] "Request Body" body=""
	I1212 00:18:40.286444   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:40.286835   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:40.786204   48339 type.go:168] "Request Body" body=""
	I1212 00:18:40.786287   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:40.786601   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:41.286231   48339 type.go:168] "Request Body" body=""
	I1212 00:18:41.286307   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:41.286653   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:41.786899   48339 type.go:168] "Request Body" body=""
	I1212 00:18:41.787023   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:41.787291   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:41.787331   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:42.287103   48339 type.go:168] "Request Body" body=""
	I1212 00:18:42.287183   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:42.287534   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:42.786413   48339 type.go:168] "Request Body" body=""
	I1212 00:18:42.786496   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:42.786838   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:43.286712   48339 type.go:168] "Request Body" body=""
	I1212 00:18:43.286788   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:43.287076   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:43.786845   48339 type.go:168] "Request Body" body=""
	I1212 00:18:43.786921   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:43.787255   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:44.287058   48339 type.go:168] "Request Body" body=""
	I1212 00:18:44.287145   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:44.287474   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:44.287531   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:44.786152   48339 type.go:168] "Request Body" body=""
	I1212 00:18:44.786226   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:44.786558   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:45.286226   48339 type.go:168] "Request Body" body=""
	I1212 00:18:45.286300   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:45.286609   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:45.786194   48339 type.go:168] "Request Body" body=""
	I1212 00:18:45.786265   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:45.786613   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:46.287075   48339 type.go:168] "Request Body" body=""
	I1212 00:18:46.287143   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:46.287427   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:46.786109   48339 type.go:168] "Request Body" body=""
	I1212 00:18:46.786181   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:46.786497   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:46.786555   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:47.286231   48339 type.go:168] "Request Body" body=""
	I1212 00:18:47.286325   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:47.286637   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:47.786326   48339 type.go:168] "Request Body" body=""
	I1212 00:18:47.786398   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:47.786701   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:48.286663   48339 type.go:168] "Request Body" body=""
	I1212 00:18:48.286736   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:48.287070   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:48.786872   48339 type.go:168] "Request Body" body=""
	I1212 00:18:48.786951   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:48.787298   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:48.787351   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:49.287060   48339 type.go:168] "Request Body" body=""
	I1212 00:18:49.287138   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:49.287405   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:49.786097   48339 type.go:168] "Request Body" body=""
	I1212 00:18:49.786175   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:49.786470   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:50.286223   48339 type.go:168] "Request Body" body=""
	I1212 00:18:50.286298   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:50.286634   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:50.786914   48339 type.go:168] "Request Body" body=""
	I1212 00:18:50.786986   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:50.787320   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:50.787380   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:51.287127   48339 type.go:168] "Request Body" body=""
	I1212 00:18:51.287204   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:51.287530   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:51.786096   48339 type.go:168] "Request Body" body=""
	I1212 00:18:51.786170   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:51.786513   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:52.286949   48339 type.go:168] "Request Body" body=""
	I1212 00:18:52.287031   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:52.287290   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:52.786331   48339 type.go:168] "Request Body" body=""
	I1212 00:18:52.786411   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:52.786755   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:53.286620   48339 type.go:168] "Request Body" body=""
	I1212 00:18:53.286694   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:53.287034   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:53.287095   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:53.786805   48339 type.go:168] "Request Body" body=""
	I1212 00:18:53.786875   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:53.787154   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:54.286914   48339 type.go:168] "Request Body" body=""
	I1212 00:18:54.286986   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:54.287311   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:54.787067   48339 type.go:168] "Request Body" body=""
	I1212 00:18:54.787140   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:54.787481   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:55.287095   48339 type.go:168] "Request Body" body=""
	I1212 00:18:55.287162   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:55.287415   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:55.287454   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:55.786091   48339 type.go:168] "Request Body" body=""
	I1212 00:18:55.786159   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:55.786468   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:56.286160   48339 type.go:168] "Request Body" body=""
	I1212 00:18:56.286232   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:56.286551   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:56.786800   48339 type.go:168] "Request Body" body=""
	I1212 00:18:56.786866   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:56.787137   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:57.286892   48339 type.go:168] "Request Body" body=""
	I1212 00:18:57.286971   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:57.287328   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:57.787142   48339 type.go:168] "Request Body" body=""
	I1212 00:18:57.787233   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:57.787583   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:18:57.787634   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:18:58.286388   48339 type.go:168] "Request Body" body=""
	I1212 00:18:58.286461   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:58.286718   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:58.786377   48339 type.go:168] "Request Body" body=""
	I1212 00:18:58.786448   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:58.786805   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:59.286505   48339 type.go:168] "Request Body" body=""
	I1212 00:18:59.286587   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:59.286890   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:18:59.786144   48339 type.go:168] "Request Body" body=""
	I1212 00:18:59.786225   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:18:59.786592   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:00.286264   48339 type.go:168] "Request Body" body=""
	I1212 00:19:00.286345   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:00.286656   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:00.286735   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:00.786383   48339 type.go:168] "Request Body" body=""
	I1212 00:19:00.786458   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:00.786791   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:01.286179   48339 type.go:168] "Request Body" body=""
	I1212 00:19:01.286250   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:01.286584   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:01.786253   48339 type.go:168] "Request Body" body=""
	I1212 00:19:01.786329   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:01.786671   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:02.286241   48339 type.go:168] "Request Body" body=""
	I1212 00:19:02.286317   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:02.286672   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:02.786408   48339 type.go:168] "Request Body" body=""
	I1212 00:19:02.786476   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:02.786723   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:02.786763   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:03.286700   48339 type.go:168] "Request Body" body=""
	I1212 00:19:03.286795   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:03.287188   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:03.787019   48339 type.go:168] "Request Body" body=""
	I1212 00:19:03.787097   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:03.787433   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:04.286096   48339 type.go:168] "Request Body" body=""
	I1212 00:19:04.286175   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:04.286490   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:04.786196   48339 type.go:168] "Request Body" body=""
	I1212 00:19:04.786274   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:04.786601   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:05.286296   48339 type.go:168] "Request Body" body=""
	I1212 00:19:05.286371   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:05.286696   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:05.286753   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:05.786175   48339 type.go:168] "Request Body" body=""
	I1212 00:19:05.786254   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:05.786601   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:06.286229   48339 type.go:168] "Request Body" body=""
	I1212 00:19:06.286306   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:06.286600   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:06.786191   48339 type.go:168] "Request Body" body=""
	I1212 00:19:06.786264   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:06.786580   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:07.286119   48339 type.go:168] "Request Body" body=""
	I1212 00:19:07.286199   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:07.286473   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:07.786186   48339 type.go:168] "Request Body" body=""
	I1212 00:19:07.786259   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:07.786536   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:07.786581   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:08.286383   48339 type.go:168] "Request Body" body=""
	I1212 00:19:08.286463   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:08.286917   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:08.786174   48339 type.go:168] "Request Body" body=""
	I1212 00:19:08.786248   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:08.786551   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:09.286220   48339 type.go:168] "Request Body" body=""
	I1212 00:19:09.286299   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:09.286639   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:09.786169   48339 type.go:168] "Request Body" body=""
	I1212 00:19:09.786240   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:09.786540   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:10.286846   48339 type.go:168] "Request Body" body=""
	I1212 00:19:10.286915   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:10.287189   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:10.287228   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:10.787020   48339 type.go:168] "Request Body" body=""
	I1212 00:19:10.787096   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:10.787416   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:11.286118   48339 type.go:168] "Request Body" body=""
	I1212 00:19:11.286193   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:11.286517   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:11.786150   48339 type.go:168] "Request Body" body=""
	I1212 00:19:11.786231   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:11.786516   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:12.286197   48339 type.go:168] "Request Body" body=""
	I1212 00:19:12.286271   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:12.286598   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:12.786359   48339 type.go:168] "Request Body" body=""
	I1212 00:19:12.786434   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:12.786739   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:12.786787   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:13.286561   48339 type.go:168] "Request Body" body=""
	I1212 00:19:13.286637   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:13.286885   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:13.786215   48339 type.go:168] "Request Body" body=""
	I1212 00:19:13.786291   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:13.786637   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:14.286215   48339 type.go:168] "Request Body" body=""
	I1212 00:19:14.286287   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:14.286589   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:14.786851   48339 type.go:168] "Request Body" body=""
	I1212 00:19:14.786918   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:14.787262   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:14.787320   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:15.287090   48339 type.go:168] "Request Body" body=""
	I1212 00:19:15.287165   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:15.287490   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:15.786172   48339 type.go:168] "Request Body" body=""
	I1212 00:19:15.786269   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:15.786579   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:16.286136   48339 type.go:168] "Request Body" body=""
	I1212 00:19:16.286210   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:16.286453   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:16.786222   48339 type.go:168] "Request Body" body=""
	I1212 00:19:16.786299   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:16.786659   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:17.286375   48339 type.go:168] "Request Body" body=""
	I1212 00:19:17.286453   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:17.286795   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:17.286857   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:17.786163   48339 type.go:168] "Request Body" body=""
	I1212 00:19:17.786237   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:17.786560   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:18.286451   48339 type.go:168] "Request Body" body=""
	I1212 00:19:18.286531   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:18.286856   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:18.786177   48339 type.go:168] "Request Body" body=""
	I1212 00:19:18.786251   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:18.786557   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:19.286160   48339 type.go:168] "Request Body" body=""
	I1212 00:19:19.286232   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:19.286485   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:19.786192   48339 type.go:168] "Request Body" body=""
	I1212 00:19:19.786263   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:19.786567   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:19.786614   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:20.286287   48339 type.go:168] "Request Body" body=""
	I1212 00:19:20.286370   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:20.286718   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:20.787029   48339 type.go:168] "Request Body" body=""
	I1212 00:19:20.787097   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:20.787342   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:21.287119   48339 type.go:168] "Request Body" body=""
	I1212 00:19:21.287198   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:21.287505   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:21.786177   48339 type.go:168] "Request Body" body=""
	I1212 00:19:21.786266   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:21.786579   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:22.287046   48339 type.go:168] "Request Body" body=""
	I1212 00:19:22.287111   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:22.287377   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:22.287420   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:22.786275   48339 type.go:168] "Request Body" body=""
	I1212 00:19:22.786343   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:22.786646   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:23.286598   48339 type.go:168] "Request Body" body=""
	I1212 00:19:23.286692   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:23.287042   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:23.786834   48339 type.go:168] "Request Body" body=""
	I1212 00:19:23.786913   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:23.787199   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:24.286916   48339 type.go:168] "Request Body" body=""
	I1212 00:19:24.287018   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:24.287331   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:24.787102   48339 type.go:168] "Request Body" body=""
	I1212 00:19:24.787174   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:24.787525   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:24.787578   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:25.286170   48339 type.go:168] "Request Body" body=""
	I1212 00:19:25.286246   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:25.286510   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:25.787014   48339 type.go:168] "Request Body" body=""
	I1212 00:19:25.787086   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:25.787411   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:26.286133   48339 type.go:168] "Request Body" body=""
	I1212 00:19:26.286205   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:26.286533   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:26.786125   48339 type.go:168] "Request Body" body=""
	I1212 00:19:26.786195   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:26.786499   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:27.286222   48339 type.go:168] "Request Body" body=""
	I1212 00:19:27.286294   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:27.286619   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:27.286677   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:27.786180   48339 type.go:168] "Request Body" body=""
	I1212 00:19:27.786252   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:27.786586   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:28.286372   48339 type.go:168] "Request Body" body=""
	I1212 00:19:28.286448   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:28.286700   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:28.786195   48339 type.go:168] "Request Body" body=""
	I1212 00:19:28.786271   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:28.786605   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:29.286191   48339 type.go:168] "Request Body" body=""
	I1212 00:19:29.286267   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:29.286615   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:29.786910   48339 type.go:168] "Request Body" body=""
	I1212 00:19:29.786981   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:29.787247   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:29.787287   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:30.287073   48339 type.go:168] "Request Body" body=""
	I1212 00:19:30.287154   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:30.287499   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:30.786184   48339 type.go:168] "Request Body" body=""
	I1212 00:19:30.786259   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:30.786602   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:31.286871   48339 type.go:168] "Request Body" body=""
	I1212 00:19:31.286942   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:31.287207   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:31.786942   48339 type.go:168] "Request Body" body=""
	I1212 00:19:31.787038   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:31.787334   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:31.787377   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:32.287019   48339 type.go:168] "Request Body" body=""
	I1212 00:19:32.287094   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:32.287431   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:32.786241   48339 type.go:168] "Request Body" body=""
	I1212 00:19:32.786308   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:32.786562   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:33.286586   48339 type.go:168] "Request Body" body=""
	I1212 00:19:33.286669   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:33.287081   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:33.786842   48339 type.go:168] "Request Body" body=""
	I1212 00:19:33.786915   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:33.787232   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:34.286965   48339 type.go:168] "Request Body" body=""
	I1212 00:19:34.287052   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:34.287321   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:34.287371   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:34.787103   48339 type.go:168] "Request Body" body=""
	I1212 00:19:34.787184   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:34.787507   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:35.286199   48339 type.go:168] "Request Body" body=""
	I1212 00:19:35.286275   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:35.286623   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:35.786303   48339 type.go:168] "Request Body" body=""
	I1212 00:19:35.786378   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:35.786633   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:36.286201   48339 type.go:168] "Request Body" body=""
	I1212 00:19:36.286276   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:36.286623   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:36.786173   48339 type.go:168] "Request Body" body=""
	I1212 00:19:36.786245   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:36.786551   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:36.786609   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:37.286157   48339 type.go:168] "Request Body" body=""
	I1212 00:19:37.286229   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:37.286482   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:37.786158   48339 type.go:168] "Request Body" body=""
	I1212 00:19:37.786237   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:37.786552   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:38.286494   48339 type.go:168] "Request Body" body=""
	I1212 00:19:38.286574   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:38.286901   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:38.786471   48339 type.go:168] "Request Body" body=""
	I1212 00:19:38.786543   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:38.786828   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:38.786871   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:39.286234   48339 type.go:168] "Request Body" body=""
	I1212 00:19:39.286306   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:39.286633   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:39.786191   48339 type.go:168] "Request Body" body=""
	I1212 00:19:39.786273   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:39.786591   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:40.286174   48339 type.go:168] "Request Body" body=""
	I1212 00:19:40.286246   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:40.286501   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:40.786212   48339 type.go:168] "Request Body" body=""
	I1212 00:19:40.786284   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:40.786618   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:41.286308   48339 type.go:168] "Request Body" body=""
	I1212 00:19:41.286385   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:41.286717   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:41.286778   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:41.786259   48339 type.go:168] "Request Body" body=""
	I1212 00:19:41.786336   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:41.786616   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:42.286266   48339 type.go:168] "Request Body" body=""
	I1212 00:19:42.286426   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:42.286836   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:42.786557   48339 type.go:168] "Request Body" body=""
	I1212 00:19:42.786636   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:42.786968   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:43.286831   48339 type.go:168] "Request Body" body=""
	I1212 00:19:43.286907   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:43.287195   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:43.287247   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:43.786980   48339 type.go:168] "Request Body" body=""
	I1212 00:19:43.787071   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:43.787383   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:44.286097   48339 type.go:168] "Request Body" body=""
	I1212 00:19:44.286182   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:44.286516   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:44.787095   48339 type.go:168] "Request Body" body=""
	I1212 00:19:44.787170   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:44.787420   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:45.286197   48339 type.go:168] "Request Body" body=""
	I1212 00:19:45.286315   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:45.286686   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:45.786212   48339 type.go:168] "Request Body" body=""
	I1212 00:19:45.786292   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:45.786616   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:45.786667   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:46.286316   48339 type.go:168] "Request Body" body=""
	I1212 00:19:46.286391   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:46.286672   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:46.786182   48339 type.go:168] "Request Body" body=""
	I1212 00:19:46.786255   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:46.786571   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:47.286220   48339 type.go:168] "Request Body" body=""
	I1212 00:19:47.286293   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:47.286639   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:47.787075   48339 type.go:168] "Request Body" body=""
	I1212 00:19:47.787141   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:47.787388   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:47.787425   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:48.286333   48339 type.go:168] "Request Body" body=""
	I1212 00:19:48.286406   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:48.286742   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:48.786260   48339 type.go:168] "Request Body" body=""
	I1212 00:19:48.786335   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:48.786670   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:49.286373   48339 type.go:168] "Request Body" body=""
	I1212 00:19:49.286448   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:49.286721   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:49.786393   48339 type.go:168] "Request Body" body=""
	I1212 00:19:49.786466   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:49.786793   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:50.286556   48339 type.go:168] "Request Body" body=""
	I1212 00:19:50.286645   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:50.286977   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:50.287046   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:50.786244   48339 type.go:168] "Request Body" body=""
	I1212 00:19:50.786323   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:50.786639   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:51.286207   48339 type.go:168] "Request Body" body=""
	I1212 00:19:51.286281   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:51.286646   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:51.786236   48339 type.go:168] "Request Body" body=""
	I1212 00:19:51.786326   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:51.786698   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:52.286383   48339 type.go:168] "Request Body" body=""
	I1212 00:19:52.286453   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:52.286705   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:52.786430   48339 type.go:168] "Request Body" body=""
	I1212 00:19:52.786502   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:52.786808   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:52.786864   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:53.286726   48339 type.go:168] "Request Body" body=""
	I1212 00:19:53.286799   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:53.287127   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:53.786892   48339 type.go:168] "Request Body" body=""
	I1212 00:19:53.786963   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:53.787281   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:54.287089   48339 type.go:168] "Request Body" body=""
	I1212 00:19:54.287161   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:54.287510   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:54.787071   48339 type.go:168] "Request Body" body=""
	I1212 00:19:54.787148   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:54.787473   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:54.787523   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:55.287042   48339 type.go:168] "Request Body" body=""
	I1212 00:19:55.287120   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:55.287397   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:55.786097   48339 type.go:168] "Request Body" body=""
	I1212 00:19:55.786167   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:55.786471   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:56.286180   48339 type.go:168] "Request Body" body=""
	I1212 00:19:56.286255   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:56.286560   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:56.786731   48339 type.go:168] "Request Body" body=""
	I1212 00:19:56.786834   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:56.787097   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:57.286916   48339 type.go:168] "Request Body" body=""
	I1212 00:19:57.287011   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:57.287338   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:57.287392   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:19:57.787113   48339 type.go:168] "Request Body" body=""
	I1212 00:19:57.787195   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:57.787542   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:58.286383   48339 type.go:168] "Request Body" body=""
	I1212 00:19:58.286455   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:58.286708   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:58.786168   48339 type.go:168] "Request Body" body=""
	I1212 00:19:58.786245   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:58.786576   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:59.286179   48339 type.go:168] "Request Body" body=""
	I1212 00:19:59.286256   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:59.286592   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:19:59.786275   48339 type.go:168] "Request Body" body=""
	I1212 00:19:59.786344   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:19:59.786595   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:19:59.786633   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-767012 poll repeated every ~500 ms from 00:20:00.286 through 00:21:01.286; every attempt returned no response ("Response" status="" headers="" milliseconds=0), and node_ready.go:55 re-emitted the "connection refused" warning above roughly every 2–2.5 s throughout ...]
	I1212 00:21:01.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:21:01.786294   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:01.786641   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:02.286227   48339 type.go:168] "Request Body" body=""
	I1212 00:21:02.286306   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:02.286637   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:02.286688   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:02.786377   48339 type.go:168] "Request Body" body=""
	I1212 00:21:02.786458   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:02.786789   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:03.286595   48339 type.go:168] "Request Body" body=""
	I1212 00:21:03.286680   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:03.287072   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:03.786847   48339 type.go:168] "Request Body" body=""
	I1212 00:21:03.786925   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:03.787257   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:04.287036   48339 type.go:168] "Request Body" body=""
	I1212 00:21:04.287108   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:04.287431   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:04.287477   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:04.786103   48339 type.go:168] "Request Body" body=""
	I1212 00:21:04.786178   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:04.786510   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:05.286213   48339 type.go:168] "Request Body" body=""
	I1212 00:21:05.286293   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:05.286653   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:05.786170   48339 type.go:168] "Request Body" body=""
	I1212 00:21:05.786239   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:05.786497   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:06.286230   48339 type.go:168] "Request Body" body=""
	I1212 00:21:06.286305   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:06.286647   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:06.786360   48339 type.go:168] "Request Body" body=""
	I1212 00:21:06.786435   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:06.786771   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:06.786825   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:07.286459   48339 type.go:168] "Request Body" body=""
	I1212 00:21:07.286536   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:07.286784   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:07.786179   48339 type.go:168] "Request Body" body=""
	I1212 00:21:07.786260   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:07.786613   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:08.286429   48339 type.go:168] "Request Body" body=""
	I1212 00:21:08.286512   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:08.286882   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:08.786159   48339 type.go:168] "Request Body" body=""
	I1212 00:21:08.786230   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:08.791780   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1212 00:21:08.791841   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:09.286491   48339 type.go:168] "Request Body" body=""
	I1212 00:21:09.286564   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:09.286869   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:09.786199   48339 type.go:168] "Request Body" body=""
	I1212 00:21:09.786280   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:09.786589   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:10.286143   48339 type.go:168] "Request Body" body=""
	I1212 00:21:10.286219   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:10.286481   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:10.786184   48339 type.go:168] "Request Body" body=""
	I1212 00:21:10.786253   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:10.786584   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:11.286268   48339 type.go:168] "Request Body" body=""
	I1212 00:21:11.286353   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:11.286684   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:11.286736   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:11.786169   48339 type.go:168] "Request Body" body=""
	I1212 00:21:11.786241   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:11.786538   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:12.286254   48339 type.go:168] "Request Body" body=""
	I1212 00:21:12.286329   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:12.286629   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:12.786499   48339 type.go:168] "Request Body" body=""
	I1212 00:21:12.786576   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:12.786914   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 00:21:13.286651   48339 type.go:168] "Request Body" body=""
	I1212 00:21:13.286728   48339 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-767012" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 00:21:13.286985   48339 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 00:21:13.287050   48339 node_ready.go:55] error getting node "functional-767012" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-767012": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 00:21:13.786749   48339 type.go:168] "Request Body" body=""
	I1212 00:21:13.786806   48339 node_ready.go:38] duration metric: took 6m0.00081197s for node "functional-767012" to be "Ready" ...
	I1212 00:21:13.789905   48339 out.go:203] 
	W1212 00:21:13.792750   48339 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 00:21:13.792769   48339 out.go:285] * 
	W1212 00:21:13.794879   48339 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:21:13.797575   48339 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 00:21:21 functional-767012 containerd[5228]: time="2025-12-12T00:21:21.326584300Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:21:22 functional-767012 containerd[5228]: time="2025-12-12T00:21:22.426399597Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 12 00:21:22 functional-767012 containerd[5228]: time="2025-12-12T00:21:22.429099624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 12 00:21:22 functional-767012 containerd[5228]: time="2025-12-12T00:21:22.436695899Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:21:22 functional-767012 containerd[5228]: time="2025-12-12T00:21:22.437190344Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:21:23 functional-767012 containerd[5228]: time="2025-12-12T00:21:23.376992857Z" level=info msg="No images store for sha256:70ac7e50cd2bc79b9d4a21c7c3336a342085c395cbc060852cb7a1a27478be50"
	Dec 12 00:21:23 functional-767012 containerd[5228]: time="2025-12-12T00:21:23.379248081Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-767012\""
	Dec 12 00:21:23 functional-767012 containerd[5228]: time="2025-12-12T00:21:23.391823869Z" level=info msg="ImageCreate event name:\"sha256:2af6a2f60c44ae40a2b1bc226758dd0a3c3f1c0d22fd7d74035513945443e825\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:21:23 functional-767012 containerd[5228]: time="2025-12-12T00:21:23.392117509Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-767012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:21:24 functional-767012 containerd[5228]: time="2025-12-12T00:21:24.146206177Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 12 00:21:24 functional-767012 containerd[5228]: time="2025-12-12T00:21:24.148641398Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 12 00:21:24 functional-767012 containerd[5228]: time="2025-12-12T00:21:24.151623413Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 12 00:21:24 functional-767012 containerd[5228]: time="2025-12-12T00:21:24.162436159Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.097681168Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.100430566Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.102367887Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.110297605Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.290393375Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.292904978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.303585981Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.303967252Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.424210519Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.426335395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.434299796Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:21:25 functional-767012 containerd[5228]: time="2025-12-12T00:21:25.434898537Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:21:29.483810    9331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:21:29.485132    9331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:21:29.485892    9331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:21:29.486691    9331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:21:29.488236    9331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 00:21:29 up  1:03,  0 user,  load average: 0.83, 0.41, 0.56
	Linux functional-767012 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:21:26 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:21:26 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 12 00:21:26 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:26 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:26 functional-767012 kubelet[9109]: E1212 00:21:26.844490    9109 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:21:26 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:21:26 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:21:27 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 12 00:21:27 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:27 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:27 functional-767012 kubelet[9204]: E1212 00:21:27.593990    9204 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:21:27 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:21:27 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:21:28 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 12 00:21:28 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:28 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:28 functional-767012 kubelet[9225]: E1212 00:21:28.355649    9225 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:21:28 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:21:28 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:21:29 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 12 00:21:29 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:29 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:21:29 functional-767012 kubelet[9246]: E1212 00:21:29.097946    9246 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:21:29 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:21:29 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
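Note: the kubelet journal at the end of the dump above shows kubelet crash-looping (restart counters 826-829) because its configuration validation rejects cgroup v1 hosts. As a minimal, untested sketch against this run's profile: the host cgroup version can be confirmed from inside the node, and, per the SystemVerification warning quoted in the kubeadm output further below, cgroup v1 support for kubelet v1.35 or newer requires the KubeletConfiguration option 'FailCgroupV1' set to 'false'. The YAML key spelling failCgroupV1 and the append-to-config approach are assumptions, not something this run verified:

	# "cgroup2fs" indicates cgroup v2; "tmpfs" indicates cgroup v1 on this host
	out/minikube-linux-arm64 -p functional-767012 ssh -- stat -fc %T /sys/fs/cgroup
	# Assumed YAML spelling of the FailCgroupV1 option named in the warning,
	# appended to the kubelet config file the kubeadm output says it writes
	out/minikube-linux-arm64 -p functional-767012 ssh -- sudo sh -c \
	  'echo "failCgroupV1: false" >> /var/lib/kubelet/config.yaml'
	out/minikube-linux-arm64 -p functional-767012 ssh -- sudo systemctl restart kubelet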
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012: exit status 2 (379.087925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-767012" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-767012 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1212 00:23:41.615145    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:25:57.043019    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:27:20.111601    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:28:41.617135    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:30:57.043014    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:33:41.615516    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-767012 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m13.262876184s)

-- stdout --
	* [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-767012" primary control-plane node in "functional-767012" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001026811s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001134626s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001134626s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
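Note: the Suggestion line above targets a cgroup-driver mismatch; whether it also clears the separate cgroup v1 validation shown in the kubelet journal is not established by this run. A minimal sketch of that suggested retry plus the diagnostics kubeadm itself names, with the profile and admission-plugin flag copied from this test:

	# Retry with the flag minikube suggests, keeping the test's extra-config
	out/minikube-linux-arm64 start -p functional-767012 --wait=all \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --extra-config=kubelet.cgroup-driver=systemd
	# Diagnostics named in the kubeadm output, run inside the node
	out/minikube-linux-arm64 -p functional-767012 ssh -- systemctl status kubelet
	out/minikube-linux-arm64 -p functional-767012 ssh -- journalctl -xeu kubelet
	# Full log bundle, as the advice box recommends
	out/minikube-linux-arm64 -p functional-767012 logs --file=logs.txt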
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-767012 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m13.269642937s for "functional-767012" cluster.
I1212 00:33:43.766617    4290 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-767012
helpers_test.go:244: (dbg) docker inspect functional-767012:

-- stdout --
	[
	    {
	        "Id": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	        "Created": "2025-12-12T00:06:52.261765556Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42951,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:06:52.317917194Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hostname",
	        "HostsPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hosts",
	        "LogPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e-json.log",
	        "Name": "/functional-767012",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-767012:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-767012",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	                "LowerDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-767012",
	                "Source": "/var/lib/docker/volumes/functional-767012/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-767012",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-767012",
	                "name.minikube.sigs.k8s.io": "functional-767012",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e781257da3adf1d3284ab2a6de0168c3db7957f25a7e53d0015250294193762d",
	            "SandboxKey": "/var/run/docker/netns/e781257da3ad",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-767012": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:4d:78:ba:7d:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "83467cc4cb13818b98ec0d7cb5fc0064ea6eb2c8db4256a8a81330921aa2d9a4",
	                    "EndpointID": "b787b732d8d748776ceeb6e65fab51cc1e79758446bc85ac20043b35593fab12",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-767012",
	                        "6585a82fe5e6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
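For reference, the HostConfig limits in the inspect dump above (Memory=4294967296, NanoCpus=2000000000) correspond to the 4096MB / 2-CPU settings requested at start. A minimal Go sketch of reading those fields back, assuming only that the docker CLI is on PATH and using the container name from the output above:

	// inspect_limits.go: shells out to `docker container inspect` and decodes
	// the resource limits shown in the dump above. The container name is
	// copied from the test output; everything else is standard library.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type inspectEntry struct {
		HostConfig struct {
			Memory   int64 `json:"Memory"`
			NanoCpus int64 `json:"NanoCpus"`
		} `json:"HostConfig"`
	}

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "functional-767012").Output()
		if err != nil {
			log.Fatal(err)
		}
		// `docker container inspect` always emits a JSON array, one entry per container.
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			log.Fatal(err)
		}
		for _, e := range entries {
			fmt.Printf("memory=%d bytes, cpus=%.1f\n",
				e.HostConfig.Memory, float64(e.HostConfig.NanoCpus)/1e9)
		}
	}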
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012: exit status 2 (335.670136ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
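As the "(may be ok)" note indicates, minikube status reports component state through its exit code, so a non-zero exit is a state report rather than a harness failure. A hedged Go sketch of running the same probe while keeping the captured output (binary path and profile name copied from the command above):

	// status_probe.go: runs the same status command the harness uses above and
	// treats a non-zero exit as a state report rather than a hard failure.
	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "functional-767012")
		out, err := cmd.Output() // stdout is still returned alongside an *ExitError
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("host state: %s\n", out)
		case errors.As(err, &exitErr):
			// A degraded component yields a non-zero code (exit status 2 here),
			// but the printed state ("Running") remains meaningful.
			fmt.Printf("host state: %s (exit code %d)\n", out, exitErr.ExitCode())
		default:
			log.Fatal(err) // the command itself could not be started
		}
	}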
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-095481 image ls --format short --alsologtostderr                                                                                             │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image   │ functional-095481 image ls --format table --alsologtostderr                                                                                             │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image   │ functional-095481 image ls --format json --alsologtostderr                                                                                              │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ ssh     │ functional-095481 ssh pgrep buildkitd                                                                                                                   │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │                     │
	│ image   │ functional-095481 image build -t localhost/my-image:functional-095481 testdata/build --alsologtostderr                                                  │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image   │ functional-095481 image ls                                                                                                                              │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ delete  │ -p functional-095481                                                                                                                                    │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ start   │ -p functional-767012 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │                     │
	│ start   │ -p functional-767012 --alsologtostderr -v=8                                                                                                             │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:15 UTC │                     │
	│ cache   │ functional-767012 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ functional-767012 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ functional-767012 cache add registry.k8s.io/pause:latest                                                                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ functional-767012 cache add minikube-local-cache-test:functional-767012                                                                                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ functional-767012 cache delete minikube-local-cache-test:functional-767012                                                                              │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-767012 ssh sudo crictl images                                                                                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-767012 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-767012 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ cache   │ functional-767012 cache reload                                                                                                                          │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-767012 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ kubectl │ functional-767012 kubectl -- --context functional-767012 get pods                                                                                       │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ start   │ -p functional-767012 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:21:30
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:21:30.554245   54101 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:21:30.554345   54101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:21:30.554348   54101 out.go:374] Setting ErrFile to fd 2...
	I1212 00:21:30.554353   54101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:21:30.554677   54101 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:21:30.555164   54101 out.go:368] Setting JSON to false
	I1212 00:21:30.555965   54101 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3837,"bootTime":1765495054,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 00:21:30.556051   54101 start.go:143] virtualization:  
	I1212 00:21:30.559689   54101 out.go:179] * [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:21:30.562867   54101 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:21:30.562960   54101 notify.go:221] Checking for updates...
	I1212 00:21:30.566618   54101 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:21:30.569772   54101 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:21:30.572750   54101 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 00:21:30.576169   54101 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:21:30.579060   54101 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:21:30.582404   54101 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:21:30.582492   54101 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:21:30.621591   54101 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:21:30.621756   54101 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:21:30.683145   54101 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 00:21:30.674181767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:21:30.683240   54101 docker.go:319] overlay module found
	I1212 00:21:30.688118   54101 out.go:179] * Using the docker driver based on existing profile
	I1212 00:21:30.690961   54101 start.go:309] selected driver: docker
	I1212 00:21:30.690971   54101 start.go:927] validating driver "docker" against &{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:21:30.691125   54101 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:21:30.691237   54101 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:21:30.747846   54101 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 00:21:30.73816398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:21:30.748230   54101 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:21:30.748252   54101 cni.go:84] Creating CNI manager for ""
	I1212 00:21:30.748298   54101 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:21:30.748340   54101 start.go:353] cluster config:
	{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:21:30.751463   54101 out.go:179] * Starting "functional-767012" primary control-plane node in "functional-767012" cluster
	I1212 00:21:30.754231   54101 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 00:21:30.757160   54101 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:21:30.760119   54101 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:21:30.760160   54101 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 00:21:30.760168   54101 cache.go:65] Caching tarball of preloaded images
	I1212 00:21:30.760193   54101 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:21:30.760258   54101 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 00:21:30.760267   54101 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 00:21:30.760383   54101 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/config.json ...
	I1212 00:21:30.778906   54101 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:21:30.778917   54101 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:21:30.778938   54101 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:21:30.778968   54101 start.go:360] acquireMachinesLock for functional-767012: {Name:mk41cf89e93a3830367886ebbef2bb8f6e99e3f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:21:30.779070   54101 start.go:364] duration metric: took 80.115µs to acquireMachinesLock for "functional-767012"
	I1212 00:21:30.779088   54101 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:21:30.779093   54101 fix.go:54] fixHost starting: 
	I1212 00:21:30.779346   54101 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:21:30.795901   54101 fix.go:112] recreateIfNeeded on functional-767012: state=Running err=<nil>
	W1212 00:21:30.795920   54101 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:21:30.799043   54101 out.go:252] * Updating the running docker "functional-767012" container ...
	I1212 00:21:30.799064   54101 machine.go:94] provisionDockerMachine start ...
	I1212 00:21:30.799139   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:30.816214   54101 main.go:143] libmachine: Using SSH client type: native
	I1212 00:21:30.816539   54101 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:21:30.816545   54101 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:21:30.966929   54101 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
	I1212 00:21:30.966943   54101 ubuntu.go:182] provisioning hostname "functional-767012"
	I1212 00:21:30.967026   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:30.983921   54101 main.go:143] libmachine: Using SSH client type: native
	I1212 00:21:30.984212   54101 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:21:30.984220   54101 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-767012 && echo "functional-767012" | sudo tee /etc/hostname
	I1212 00:21:31.148238   54101 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
	I1212 00:21:31.148339   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:31.167090   54101 main.go:143] libmachine: Using SSH client type: native
	I1212 00:21:31.167393   54101 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:21:31.167407   54101 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-767012' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-767012/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-767012' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:21:31.315620   54101 main.go:143] libmachine: SSH cmd err, output: <nil>: 
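The repeated `docker container inspect -f` calls above resolve the host port that Docker mapped to the container's 22/tcp (32788 in this run, matching the Ports table in the inspect dump). A minimal Go sketch of the same lookup, reusing the template string verbatim from the log:

	// ssh_port.go: resolves the host port mapped to the container's 22/tcp,
	// mirroring the `docker container inspect -f` template used above.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", tmpl, "functional-767012").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("ssh endpoint: 127.0.0.1:" + strings.TrimSpace(string(out)))
	}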
	I1212 00:21:31.315644   54101 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 00:21:31.315665   54101 ubuntu.go:190] setting up certificates
	I1212 00:21:31.315680   54101 provision.go:84] configureAuth start
	I1212 00:21:31.315738   54101 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:21:31.348126   54101 provision.go:143] copyHostCerts
	I1212 00:21:31.348184   54101 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 00:21:31.348191   54101 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 00:21:31.348265   54101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 00:21:31.348353   54101 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 00:21:31.348357   54101 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 00:21:31.348380   54101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 00:21:31.348433   54101 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 00:21:31.348436   54101 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 00:21:31.348457   54101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 00:21:31.348500   54101 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.functional-767012 san=[127.0.0.1 192.168.49.2 functional-767012 localhost minikube]
	I1212 00:21:31.571131   54101 provision.go:177] copyRemoteCerts
	I1212 00:21:31.571185   54101 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:21:31.571226   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:31.588332   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:31.690410   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:21:31.707240   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:21:31.724075   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:21:31.740524   54101 provision.go:87] duration metric: took 424.823605ms to configureAuth
	I1212 00:21:31.740541   54101 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:21:31.740761   54101 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:21:31.740771   54101 machine.go:97] duration metric: took 941.698571ms to provisionDockerMachine
	I1212 00:21:31.740778   54101 start.go:293] postStartSetup for "functional-767012" (driver="docker")
	I1212 00:21:31.740788   54101 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:21:31.740838   54101 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:21:31.740873   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:31.758388   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:31.866987   54101 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:21:31.870573   54101 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:21:31.870591   54101 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:21:31.870603   54101 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 00:21:31.870659   54101 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 00:21:31.870732   54101 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 00:21:31.870809   54101 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts -> hosts in /etc/test/nested/copy/4290
	I1212 00:21:31.870853   54101 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4290
	I1212 00:21:31.878221   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:21:31.898601   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts --> /etc/test/nested/copy/4290/hosts (40 bytes)
	I1212 00:21:31.917863   54101 start.go:296] duration metric: took 177.070825ms for postStartSetup
	I1212 00:21:31.917948   54101 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:21:31.917994   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:31.934865   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:32.037797   54101 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:21:32.044535   54101 fix.go:56] duration metric: took 1.265435742s for fixHost
	I1212 00:21:32.044551   54101 start.go:83] releasing machines lock for "functional-767012", held for 1.265473363s
	I1212 00:21:32.044634   54101 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:21:32.063486   54101 ssh_runner.go:195] Run: cat /version.json
	I1212 00:21:32.063525   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:32.063754   54101 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:21:32.063796   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:32.082463   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:32.110313   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:32.198490   54101 ssh_runner.go:195] Run: systemctl --version
	I1212 00:21:32.295700   54101 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:21:32.300162   54101 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:21:32.300220   54101 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:21:32.308110   54101 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:21:32.308123   54101 start.go:496] detecting cgroup driver to use...
	I1212 00:21:32.308152   54101 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 00:21:32.308196   54101 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:21:32.324857   54101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:21:32.337980   54101 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:21:32.338034   54101 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:21:32.353838   54101 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:21:32.367832   54101 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:21:32.501329   54101 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:21:32.628856   54101 docker.go:234] disabling docker service ...
	I1212 00:21:32.628933   54101 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:21:32.643664   54101 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:21:32.657070   54101 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:21:32.773509   54101 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:21:32.920829   54101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:21:32.933710   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:21:32.947319   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 00:21:32.956944   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:21:32.966825   54101 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:21:32.966891   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:21:32.976378   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:21:32.985341   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:21:32.995459   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:21:33.011573   54101 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:21:33.020559   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 00:21:33.029747   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 00:21:33.038731   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 00:21:33.048050   54101 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:21:33.056172   54101 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:21:33.063953   54101 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:21:33.190754   54101 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 00:21:33.330744   54101 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 00:21:33.330802   54101 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
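The containerd restart above is followed by a bounded wait for its socket. A small Go sketch of that wait, with the socket path and 60s budget taken from the log messages and a 500ms poll interval assumed:

	// wait_socket.go: polls for the containerd socket the way the
	// "Will wait 60s for socket path" step above does. Path and timeout
	// come from the log; the poll interval is an assumption.
	package main

	import (
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		const sock = "/run/containerd/containerd.sock"
		deadline := time.Now().Add(60 * time.Second)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(sock); err == nil {
				fmt.Println("socket ready:", sock)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatalf("timed out waiting for %s", sock)
	}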
	I1212 00:21:33.334307   54101 start.go:564] Will wait 60s for crictl version
	I1212 00:21:33.334373   54101 ssh_runner.go:195] Run: which crictl
	I1212 00:21:33.337855   54101 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:21:33.361388   54101 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 00:21:33.361444   54101 ssh_runner.go:195] Run: containerd --version
	I1212 00:21:33.383087   54101 ssh_runner.go:195] Run: containerd --version
	I1212 00:21:33.409485   54101 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 00:21:33.412580   54101 cli_runner.go:164] Run: docker network inspect functional-767012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:21:33.429552   54101 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:21:33.436766   54101 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 00:21:33.439631   54101 kubeadm.go:884] updating cluster {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:21:33.439814   54101 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:21:33.439917   54101 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:21:33.465266   54101 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:21:33.465277   54101 containerd.go:534] Images already preloaded, skipping extraction
	I1212 00:21:33.465345   54101 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:21:33.495685   54101 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:21:33.495696   54101 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:21:33.495703   54101 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 00:21:33.495800   54101 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-767012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:21:33.495863   54101 ssh_runner.go:195] Run: sudo crictl info
	I1212 00:21:33.520655   54101 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 00:21:33.520679   54101 cni.go:84] Creating CNI manager for ""
	I1212 00:21:33.520688   54101 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:21:33.520701   54101 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:21:33.520721   54101 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-767012 NodeName:functional-767012 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:21:33.520840   54101 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-767012"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:21:33.520909   54101 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:21:33.528771   54101 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:21:33.528832   54101 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:21:33.537845   54101 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 00:21:33.552578   54101 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:21:33.567275   54101 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
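The kubeadm config rendered above and copied to /var/tmp/minikube/kubeadm.yaml.new is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A hedged sketch of splitting and listing such a stream offline, using gopkg.in/yaml.v3 purely for illustration; minikube's own validation path differs:

	// kubeadm_check.go: decodes a multi-document kubeadm config like the one
	// rendered above and prints each document's apiVersion and kind.
	// gopkg.in/yaml.v3 is an assumed third-party dependency.
	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // on-node path from the log
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f) // iterates over `---`-separated documents
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
		}
	}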
	I1212 00:21:33.581608   54101 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:21:33.586017   54101 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:21:33.720787   54101 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:21:34.285938   54101 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012 for IP: 192.168.49.2
	I1212 00:21:34.285949   54101 certs.go:195] generating shared ca certs ...
	I1212 00:21:34.285964   54101 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:21:34.286114   54101 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 00:21:34.286160   54101 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 00:21:34.286167   54101 certs.go:257] generating profile certs ...
	I1212 00:21:34.286262   54101 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key
	I1212 00:21:34.286326   54101 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key.fcbff5a4
	I1212 00:21:34.286371   54101 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key
	I1212 00:21:34.286484   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 00:21:34.286514   54101 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 00:21:34.286521   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:21:34.286547   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:21:34.286569   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:21:34.286590   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 00:21:34.286633   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:21:34.287348   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:21:34.308553   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:21:34.331894   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:21:34.355464   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:21:34.374443   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:21:34.393434   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:21:34.411599   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:21:34.429619   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:21:34.447321   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 00:21:34.464997   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:21:34.482627   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 00:21:34.500926   54101 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:21:34.513622   54101 ssh_runner.go:195] Run: openssl version
	I1212 00:21:34.519764   54101 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 00:21:34.527069   54101 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 00:21:34.534472   54101 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 00:21:34.538121   54101 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 00:21:34.538179   54101 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 00:21:34.579437   54101 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:21:34.586891   54101 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 00:21:34.594262   54101 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 00:21:34.601868   54101 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 00:21:34.605501   54101 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 00:21:34.605557   54101 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 00:21:34.646393   54101 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:21:34.653807   54101 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:34.661225   54101 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:21:34.668768   54101 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:34.672511   54101 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:34.672567   54101 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:34.713655   54101 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
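Each of the three certificates above goes through the same install cycle: copy the PEM into /usr/share/ca-certificates, symlink it into /etc/ssl/certs, compute the OpenSSL subject hash (`openssl x509 -hash -noout`), and verify the `<hash>.0` symlink that OpenSSL-style trust stores look up (e.g. b5213941.0 for minikubeCA.pem). A minimal Go sketch of the hash-and-link step, not minikube's actual implementation:

// Hypothetical sketch: install a CA into an OpenSSL trust store by
// creating the <subject-hash>.0 symlink in /etc/ssl/certs.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath string) error {
	// `openssl x509 -hash -noout` prints the subject-name hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}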
	I1212 00:21:34.721031   54101 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:21:34.724786   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:21:34.765815   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:21:34.806690   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:21:34.847558   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:21:34.888576   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:21:34.933434   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
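`-checkend 86400` asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit marks the cert for regeneration. The equivalent check in Go, as an illustrative sketch against one of the files tested above:

// Sketch of what `openssl x509 -checkend 86400` verifies.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if the cert's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}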
	I1212 00:21:34.978399   54101 kubeadm.go:401] StartCluster: {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:21:34.978479   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 00:21:34.978543   54101 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:21:35.017576   54101 cri.go:89] found id: ""
	I1212 00:21:35.017638   54101 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:21:35.026096   54101 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:21:35.026118   54101 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:21:35.026171   54101 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:21:35.034785   54101 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:21:35.035314   54101 kubeconfig.go:125] found "functional-767012" server: "https://192.168.49.2:8441"
	I1212 00:21:35.036573   54101 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:21:35.046414   54101 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 00:07:00.613095536 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 00:21:33.576611675 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
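Drift detection here is simply `diff -u` between the kubeadm.yaml deployed at 00:07 and the freshly rendered kubeadm.yaml.new; diff's exit status 1 (files differ) is the reconfigure signal, and the hunk shows the cause: this test's extra-config swapped the default admission plugins for NamespaceAutoProvision. A sketch of that check under those assumptions:

// Sketch: treat `diff -u old new` exit code 1 as "config drifted".
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: files identical
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ
	}
	return false, "", err // exit 2: diff itself failed (missing file, etc.)
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Println("kubeadm config drift detected:\n" + diff)
	}
}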
	I1212 00:21:35.046427   54101 kubeadm.go:1161] stopping kube-system containers ...
	I1212 00:21:35.046437   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1212 00:21:35.046492   54101 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:21:35.082797   54101 cri.go:89] found id: ""
	I1212 00:21:35.082857   54101 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 00:21:35.102877   54101 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:21:35.111403   54101 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 12 00:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 12 00:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 12 00:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 12 00:11 /etc/kubernetes/scheduler.conf
	
	I1212 00:21:35.111465   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:21:35.120302   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:21:35.128075   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:21:35.128131   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:21:35.135780   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:21:35.143743   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:21:35.143796   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:21:35.151555   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:21:35.159766   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:21:35.159823   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
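The pattern from 00:21:35.11 onward is a stale-kubeconfig sweep: each conf under /etc/kubernetes is grepped for the expected server URL, and any file that does not contain `https://control-plane.minikube.internal:8441` (grep exit 1) is deleted so the kubeconfig phase below can regenerate it; admin.conf matched and was kept. A minimal sketch of that logic, illustrative only:

// Sketch: delete a kubeconfig unless it already targets the expected endpoint.
package main

import (
	"os"
	"strings"
)

func pruneStaleConf(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // conf already points at the right server
	}
	return os.Remove(path) // stale: let `kubeadm init phase kubeconfig` rewrite it
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8441"
	for _, f := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := pruneStaleConf(f, endpoint); err != nil {
			panic(err)
		}
	}
}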
	I1212 00:21:35.167617   54101 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:21:35.175675   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:21:35.223997   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:21:36.520500   54101 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.296478898s)
	I1212 00:21:36.520559   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:21:36.729554   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:21:36.788511   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
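Rather than a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same --config. A sketch of that sequence; the binary path and phase list mirror the log, but this is not minikube's actual code:

// Sketch: replay kubeadm init phases one at a time during a cluster restart.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{"init", "phase"}, p...), "--config", cfg)
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
}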
	I1212 00:21:36.835897   54101 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:21:36.835964   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:37.336817   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... identical pgrep check repeated every ~500ms from 00:21:37 through 00:22:36; no kube-apiserver process appeared in that window ...]
	I1212 00:22:36.336423   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
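The wait above is a fixed-interval poll: roughly every 500ms, `pgrep -xnf` checks for a kube-apiserver process, and after about a minute without a hit minikube falls back to gathering diagnostics (below) before polling again. A sketch of such a loop; the one-minute budget is an assumption, not a value read from the log:

// Sketch: poll pgrep on a fixed interval until the apiserver appears or a
// deadline passes. pgrep exits 0 only when a matching process exists.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	fmt.Println("apiserver up:", waitForAPIServer(time.Minute))
}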
	I1212 00:22:36.836018   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:36.836096   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:36.862427   54101 cri.go:89] found id: ""
	I1212 00:22:36.862441   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.862448   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:36.862453   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:36.862517   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:36.892149   54101 cri.go:89] found id: ""
	I1212 00:22:36.892163   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.892169   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:36.892175   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:36.892234   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:36.916655   54101 cri.go:89] found id: ""
	I1212 00:22:36.916670   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.916677   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:36.916681   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:36.916753   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:36.945533   54101 cri.go:89] found id: ""
	I1212 00:22:36.945546   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.945554   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:36.945559   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:36.945616   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:36.970456   54101 cri.go:89] found id: ""
	I1212 00:22:36.970469   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.970477   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:36.970482   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:36.970556   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:36.997550   54101 cri.go:89] found id: ""
	I1212 00:22:36.997568   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.997577   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:36.997582   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:36.997656   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:37.043296   54101 cri.go:89] found id: ""
	I1212 00:22:37.043319   54101 logs.go:282] 0 containers: []
	W1212 00:22:37.043326   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:37.043334   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:37.043344   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:37.115314   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:37.115335   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:37.126489   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:37.126505   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:37.191880   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:37.183564   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.183995   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.185555   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.185892   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.187528   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:37.183564   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.183995   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.185555   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.185892   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.187528   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:37.191890   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:37.191900   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:37.253331   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:37.253349   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
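The "container status" gather step uses a shell fallback, `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`: resolve crictl if it is on PATH, and if the crictl invocation fails, fall back to the Docker CLI. Roughly the same logic in Go, as an illustrative sketch:

// Sketch of the crictl-or-docker fallback used in the gather step above.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() ([]byte, error) {
	// Try crictl first, mirroring `sudo crictl ps -a`.
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return out, nil
	}
	// Fall back to the Docker CLI, as the `|| sudo docker ps -a` arm does.
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status unavailable:", err)
		return
	}
	fmt.Printf("%s", out)
}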
	I1212 00:22:39.783593   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:39.793972   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:39.794055   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:39.822155   54101 cri.go:89] found id: ""
	I1212 00:22:39.822169   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.822176   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:39.822181   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:39.822250   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:39.847125   54101 cri.go:89] found id: ""
	I1212 00:22:39.847138   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.847145   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:39.847150   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:39.847210   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:39.872050   54101 cri.go:89] found id: ""
	I1212 00:22:39.872064   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.872072   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:39.872077   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:39.872143   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:39.896579   54101 cri.go:89] found id: ""
	I1212 00:22:39.896592   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.896599   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:39.896606   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:39.896664   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:39.921505   54101 cri.go:89] found id: ""
	I1212 00:22:39.921520   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.921537   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:39.921543   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:39.921602   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:39.949647   54101 cri.go:89] found id: ""
	I1212 00:22:39.949660   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.949667   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:39.949672   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:39.949739   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:39.972863   54101 cri.go:89] found id: ""
	I1212 00:22:39.972877   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.972886   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:39.972894   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:39.972904   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:39.983379   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:39.983394   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:40.083583   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:40.071923   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.073365   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.075724   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.076148   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.078746   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:40.071923   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.073365   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.075724   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.076148   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.078746   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:40.083593   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:40.083604   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:40.153645   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:40.153664   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:22:40.181452   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:40.181471   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:42.742128   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:42.752298   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:42.752357   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:42.777204   54101 cri.go:89] found id: ""
	I1212 00:22:42.777218   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.777225   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:42.777236   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:42.777295   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:42.801649   54101 cri.go:89] found id: ""
	I1212 00:22:42.801663   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.801670   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:42.801675   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:42.801731   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:42.826035   54101 cri.go:89] found id: ""
	I1212 00:22:42.826048   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.826055   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:42.826059   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:42.826131   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:42.853290   54101 cri.go:89] found id: ""
	I1212 00:22:42.853303   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.853310   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:42.853316   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:42.853372   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:42.880012   54101 cri.go:89] found id: ""
	I1212 00:22:42.880025   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.880033   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:42.880037   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:42.880097   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:42.909253   54101 cri.go:89] found id: ""
	I1212 00:22:42.909267   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.909274   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:42.909279   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:42.909335   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:42.936731   54101 cri.go:89] found id: ""
	I1212 00:22:42.936745   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.936756   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:42.936764   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:42.936782   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:42.991768   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:42.991787   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:43.005267   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:43.005283   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:43.089221   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:43.080335   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.081099   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.082720   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.083301   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.084856   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:43.080335   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.081099   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.082720   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.083301   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.084856   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:43.089233   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:43.089244   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:43.153170   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:43.153191   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:22:45.684515   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:45.696038   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:45.696106   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:45.721408   54101 cri.go:89] found id: ""
	I1212 00:22:45.721422   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.721439   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:45.721446   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:45.721518   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:45.746760   54101 cri.go:89] found id: ""
	I1212 00:22:45.746774   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.746781   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:45.746794   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:45.746852   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:45.784086   54101 cri.go:89] found id: ""
	I1212 00:22:45.784100   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.784107   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:45.784113   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:45.784196   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:45.809513   54101 cri.go:89] found id: ""
	I1212 00:22:45.809527   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.809534   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:45.809547   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:45.809603   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:45.833922   54101 cri.go:89] found id: ""
	I1212 00:22:45.833935   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.833943   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:45.833957   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:45.834020   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:45.858716   54101 cri.go:89] found id: ""
	I1212 00:22:45.858738   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.858745   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:45.858751   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:45.858819   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:45.886125   54101 cri.go:89] found id: ""
	I1212 00:22:45.886140   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.886161   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:45.886170   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:45.886181   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:22:45.913706   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:45.913723   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:45.972155   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:45.972173   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:45.982756   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:45.982771   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:46.057549   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:46.048888   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.049652   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.050838   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.051562   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.053189   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:46.048888   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.049652   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.050838   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.051562   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.053189   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:46.057568   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:46.057589   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:48.631952   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:48.641871   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:48.641945   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:48.667026   54101 cri.go:89] found id: ""
	I1212 00:22:48.667040   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.667047   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:48.667052   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:48.667111   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:48.694393   54101 cri.go:89] found id: ""
	I1212 00:22:48.694407   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.694414   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:48.694419   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:48.694479   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:48.723393   54101 cri.go:89] found id: ""
	I1212 00:22:48.723406   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.723413   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:48.723418   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:48.723480   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:48.749414   54101 cri.go:89] found id: ""
	I1212 00:22:48.749427   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.749434   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:48.749440   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:48.749500   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:48.773494   54101 cri.go:89] found id: ""
	I1212 00:22:48.773508   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.773514   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:48.773520   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:48.773584   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:48.798476   54101 cri.go:89] found id: ""
	I1212 00:22:48.798490   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.798497   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:48.798502   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:48.798570   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:48.823097   54101 cri.go:89] found id: ""
	I1212 00:22:48.823112   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.823119   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:48.823127   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:48.823136   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:48.884369   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:48.884390   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:22:48.918017   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:48.918032   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:48.974636   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:48.974656   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:48.985524   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:48.985540   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:49.075379   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:49.063866   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.064550   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.067284   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.067979   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.070881   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:49.063866   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.064550   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.067284   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.067979   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.070881   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
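The cycle above repeats for the remainder of this section: minikube polls for each control-plane container via crictl, finds none, then gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. A minimal sketch of the same per-component probe, built only from the commands visible in the log (the component list mirrors the names queried above):

    #!/bin/bash
    # Reproduce the per-component container probe seen in the log above.
    # Assumes crictl is installed and containerd is the CRI runtime, as in this job.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done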
	I1212 00:22:51.575612   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:51.585822   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:51.585880   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:51.611290   54101 cri.go:89] found id: ""
	I1212 00:22:51.611304   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.611311   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:51.611317   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:51.611376   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:51.638852   54101 cri.go:89] found id: ""
	I1212 00:22:51.638868   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.638875   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:51.638882   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:51.638941   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:51.663831   54101 cri.go:89] found id: ""
	I1212 00:22:51.663845   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.663852   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:51.663857   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:51.663914   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:51.689264   54101 cri.go:89] found id: ""
	I1212 00:22:51.689278   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.689286   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:51.689291   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:51.689350   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:51.714774   54101 cri.go:89] found id: ""
	I1212 00:22:51.714788   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.714795   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:51.714800   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:51.714889   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:51.739800   54101 cri.go:89] found id: ""
	I1212 00:22:51.739814   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.739822   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:51.739827   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:51.739885   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:51.767107   54101 cri.go:89] found id: ""
	I1212 00:22:51.767134   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.767142   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:51.767150   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:51.767160   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:51.821534   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:51.821552   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:51.832147   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:51.832161   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:51.897869   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:51.890100   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.890663   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.892157   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.892582   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.894067   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:51.890100   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.890663   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.892157   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.892582   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.894067   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:51.897889   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:51.897899   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:51.958502   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:51.958519   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:22:54.487348   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:54.497592   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:54.497655   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:54.524765   54101 cri.go:89] found id: ""
	I1212 00:22:54.524779   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.524787   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:54.524800   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:54.524860   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:54.549685   54101 cri.go:89] found id: ""
	I1212 00:22:54.549699   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.549706   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:54.549710   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:54.549766   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:54.573523   54101 cri.go:89] found id: ""
	I1212 00:22:54.573537   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.573544   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:54.573549   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:54.573607   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:54.602326   54101 cri.go:89] found id: ""
	I1212 00:22:54.602342   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.602349   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:54.602354   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:54.602411   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:54.626746   54101 cri.go:89] found id: ""
	I1212 00:22:54.626777   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.626784   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:54.626792   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:54.626860   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:54.652678   54101 cri.go:89] found id: ""
	I1212 00:22:54.652693   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.652715   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:54.652720   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:54.652789   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:54.677588   54101 cri.go:89] found id: ""
	I1212 00:22:54.677602   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.677609   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:54.677617   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:54.677627   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:54.733727   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:54.733750   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:54.744434   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:54.744450   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:54.810290   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:54.802232   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.802635   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.804258   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.804924   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.806440   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:54.802232   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.802635   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.804258   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.804924   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.806440   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:54.810301   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:54.810311   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:54.869777   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:54.869794   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
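Every describe-nodes attempt fails identically: kubectl cannot reach the apiserver on localhost:8441 (the --apiserver-port this test starts with), which is consistent with the empty kube-apiserver container list in each cycle. A hypothetical manual check, assuming shell access to the node (neither command appears in the log itself), to confirm nothing is serving that port:

    # Hypothetical check: is anything listening on the apiserver port?
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    # /livez is a standard apiserver health endpoint; given the log above,
    # expect connection refused here too.
    curl -sk https://localhost:8441/livez || echo "apiserver not reachable"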
	I1212 00:22:57.396960   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:57.406761   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:57.406819   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:57.431202   54101 cri.go:89] found id: ""
	I1212 00:22:57.431216   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.431223   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:57.431228   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:57.431285   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:57.456103   54101 cri.go:89] found id: ""
	I1212 00:22:57.456116   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.456123   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:57.456129   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:57.456185   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:57.482677   54101 cri.go:89] found id: ""
	I1212 00:22:57.482690   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.482697   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:57.482703   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:57.482776   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:57.507899   54101 cri.go:89] found id: ""
	I1212 00:22:57.507912   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.507919   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:57.507925   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:57.507986   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:57.536079   54101 cri.go:89] found id: ""
	I1212 00:22:57.536093   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.536101   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:57.536106   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:57.536167   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:57.564822   54101 cri.go:89] found id: ""
	I1212 00:22:57.564836   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.564843   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:57.564857   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:57.564923   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:57.589921   54101 cri.go:89] found id: ""
	I1212 00:22:57.589935   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.589943   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:57.589951   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:57.589961   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:57.648534   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:57.648552   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:57.659464   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:57.659481   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:57.727477   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:57.718925   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.719812   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.721551   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.722035   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.723542   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:57.718925   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.719812   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.721551   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.722035   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.723542   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:57.727497   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:57.727508   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:57.791545   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:57.791567   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:00.319474   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:00.337512   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:00.337596   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:00.386008   54101 cri.go:89] found id: ""
	I1212 00:23:00.386034   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.386042   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:00.386048   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:00.386118   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:00.435933   54101 cri.go:89] found id: ""
	I1212 00:23:00.435948   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.435961   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:00.435966   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:00.436033   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:00.464332   54101 cri.go:89] found id: ""
	I1212 00:23:00.464347   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.464354   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:00.464360   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:00.464438   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:00.492272   54101 cri.go:89] found id: ""
	I1212 00:23:00.492288   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.492296   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:00.492308   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:00.492399   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:00.523157   54101 cri.go:89] found id: ""
	I1212 00:23:00.523172   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.523180   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:00.523185   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:00.523251   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:00.551205   54101 cri.go:89] found id: ""
	I1212 00:23:00.551219   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.551227   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:00.551232   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:00.551303   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:00.581595   54101 cri.go:89] found id: ""
	I1212 00:23:00.581609   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.581616   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:00.581624   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:00.581637   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:00.638838   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:00.638857   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:00.650126   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:00.650141   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:00.717921   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:00.707574   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.709178   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.709927   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.711724   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.712419   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:00.707574   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.709178   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.709927   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.711724   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.712419   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:00.717933   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:00.717947   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:00.780105   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:00.780123   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:03.311322   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:03.323283   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:03.323344   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:03.361266   54101 cri.go:89] found id: ""
	I1212 00:23:03.361281   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.361288   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:03.361293   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:03.361353   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:03.386333   54101 cri.go:89] found id: ""
	I1212 00:23:03.386347   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.386353   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:03.386363   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:03.386421   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:03.413227   54101 cri.go:89] found id: ""
	I1212 00:23:03.413241   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.413248   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:03.413253   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:03.413310   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:03.437970   54101 cri.go:89] found id: ""
	I1212 00:23:03.437991   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.437999   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:03.438004   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:03.438060   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:03.466477   54101 cri.go:89] found id: ""
	I1212 00:23:03.466491   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.466499   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:03.466504   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:03.466561   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:03.491808   54101 cri.go:89] found id: ""
	I1212 00:23:03.491821   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.491828   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:03.491834   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:03.491890   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:03.517149   54101 cri.go:89] found id: ""
	I1212 00:23:03.517163   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.517170   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:03.517177   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:03.517187   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:03.572746   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:03.572773   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:03.584001   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:03.584018   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:03.656247   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:03.647626   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.648470   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.650161   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.650723   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.652396   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:03.647626   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.648470   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.650161   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.650723   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.652396   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:03.656257   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:03.656268   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:03.722945   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:03.722971   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:06.251078   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:06.261552   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:06.261613   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:06.289582   54101 cri.go:89] found id: ""
	I1212 00:23:06.289597   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.289605   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:06.289610   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:06.289673   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:06.317842   54101 cri.go:89] found id: ""
	I1212 00:23:06.317855   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.317863   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:06.317868   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:06.317926   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:06.352672   54101 cri.go:89] found id: ""
	I1212 00:23:06.352685   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.352692   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:06.352697   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:06.352752   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:06.382465   54101 cri.go:89] found id: ""
	I1212 00:23:06.382479   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.382486   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:06.382491   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:06.382549   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:06.409293   54101 cri.go:89] found id: ""
	I1212 00:23:06.409307   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.409325   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:06.409351   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:06.409419   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:06.437827   54101 cri.go:89] found id: ""
	I1212 00:23:06.437842   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.437850   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:06.437855   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:06.437916   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:06.461631   54101 cri.go:89] found id: ""
	I1212 00:23:06.461645   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.461652   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:06.461660   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:06.461672   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:06.524818   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:06.524837   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:06.555647   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:06.555663   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:06.613018   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:06.613037   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:06.623988   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:06.624004   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:06.689835   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:06.681072   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.681903   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.683626   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.684195   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.685841   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:06.681072   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.681903   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.683626   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.684195   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.685841   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
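The timestamps show the retry loop running on a roughly three-second cadence (00:22:48, :51, :54, :57, 00:23:00, ...), with each iteration opening on the same pgrep for a live apiserver process. A sketch of an equivalent wait loop; only the pgrep pattern is taken from the log, while the iteration count and sleep interval are assumptions:

    # Assumed retry parameters; the pgrep invocation matches the log verbatim.
    for i in $(seq 1 40); do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "kube-apiserver is running"
        break
      fi
      sleep 3
    done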
	I1212 00:23:09.190077   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:09.199951   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:09.200011   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:09.224598   54101 cri.go:89] found id: ""
	I1212 00:23:09.224612   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.224619   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:09.224624   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:09.224680   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:09.249246   54101 cri.go:89] found id: ""
	I1212 00:23:09.249259   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.249266   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:09.249270   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:09.249326   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:09.276466   54101 cri.go:89] found id: ""
	I1212 00:23:09.276481   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.276488   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:09.276493   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:09.276569   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:09.305292   54101 cri.go:89] found id: ""
	I1212 00:23:09.305306   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.305320   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:09.305325   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:09.305385   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:09.340249   54101 cri.go:89] found id: ""
	I1212 00:23:09.340263   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.340269   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:09.340274   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:09.340335   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:09.371473   54101 cri.go:89] found id: ""
	I1212 00:23:09.371487   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.371494   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:09.371499   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:09.371560   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:09.396595   54101 cri.go:89] found id: ""
	I1212 00:23:09.396611   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.396618   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:09.396626   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:09.396639   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:09.455271   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:09.455288   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:09.465948   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:09.465963   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:09.533532   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:09.524698   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.525522   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.527378   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.527995   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.529577   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:09.524698   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.525522   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.527378   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.527995   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.529577   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:09.533544   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:09.533554   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:09.595751   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:09.595769   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:12.124276   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:12.134222   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:12.134281   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:12.158363   54101 cri.go:89] found id: ""
	I1212 00:23:12.158377   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.158384   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:12.158390   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:12.158446   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:12.181913   54101 cri.go:89] found id: ""
	I1212 00:23:12.181930   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.181936   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:12.181941   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:12.181997   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:12.206035   54101 cri.go:89] found id: ""
	I1212 00:23:12.206048   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.206055   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:12.206060   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:12.206119   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:12.234593   54101 cri.go:89] found id: ""
	I1212 00:23:12.234606   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.234614   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:12.234618   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:12.234675   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:12.258839   54101 cri.go:89] found id: ""
	I1212 00:23:12.258853   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.258867   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:12.258873   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:12.258931   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:12.295188   54101 cri.go:89] found id: ""
	I1212 00:23:12.295202   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.295219   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:12.295225   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:12.295295   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:12.331819   54101 cri.go:89] found id: ""
	I1212 00:23:12.331833   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.331851   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:12.331859   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:12.331869   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:12.392019   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:12.392036   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:12.402367   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:12.402383   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:12.463715   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:12.455582   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:12.455962   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:12.457659   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:12.458359   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:12.459974   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:12.463724   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:12.463745   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:12.528182   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:12.528200   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
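
The cycle above — probe each control-plane container with crictl, then gather kubelet, dmesg, describe-nodes, containerd, and container-status output — repeats until an apiserver shows up. For local triage the probes are easy to replay; a minimal Go sketch (run it on the node, e.g. via minikube ssh — the component list and crictl flags are copied from the log lines above, the rest is illustrative and not minikube's own code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same components the log probes, in the same order.
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	} {
		// Mirrors: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: probe failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d container(s) %v\n", name, len(ids), ids)
	}
}

In this run every probe comes back empty ("found id: \"\""), which is why the gather steps that follow are the only diagnostics available.
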
	I1212 00:23:15.057258   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:15.068358   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:15.068421   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:15.094774   54101 cri.go:89] found id: ""
	I1212 00:23:15.094787   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.094804   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:15.094812   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:15.094882   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:15.120167   54101 cri.go:89] found id: ""
	I1212 00:23:15.120180   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.120188   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:15.120193   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:15.120249   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:15.150855   54101 cri.go:89] found id: ""
	I1212 00:23:15.150868   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.150886   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:15.150891   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:15.150958   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:15.179684   54101 cri.go:89] found id: ""
	I1212 00:23:15.179697   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.179704   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:15.179709   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:15.179784   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:15.204315   54101 cri.go:89] found id: ""
	I1212 00:23:15.204338   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.204345   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:15.204350   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:15.204425   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:15.229074   54101 cri.go:89] found id: ""
	I1212 00:23:15.229088   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.229095   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:15.229103   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:15.229168   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:15.253510   54101 cri.go:89] found id: ""
	I1212 00:23:15.253532   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.253540   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:15.253548   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:15.253559   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:15.264299   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:15.264317   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:15.346071   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:15.332347   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:15.334627   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:15.335427   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:15.337189   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:15.337763   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:15.346082   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:15.346092   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:15.414287   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:15.414306   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:15.440115   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:15.440130   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
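
Every describe-nodes attempt fails the same way: "dial tcp [::1]:8441: connect: connection refused", meaning nothing is listening on the test's configured apiserver port. That can be confirmed independently of kubectl with a plain TCP probe; a hedged sketch (host and port taken from the errors above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The kubectl errors above target https://localhost:8441.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// Expect the same "connect: connection refused" seen in the log
		// for as long as the apiserver container is missing.
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}

While this probe keeps failing, the crictl listings above already explain why: no kube-apiserver container was ever created.
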
	I1212 00:23:17.999409   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:18.010537   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:18.010603   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:18.036961   54101 cri.go:89] found id: ""
	I1212 00:23:18.036975   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.036982   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:18.036988   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:18.037047   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:18.065553   54101 cri.go:89] found id: ""
	I1212 00:23:18.065568   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.065575   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:18.065582   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:18.065643   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:18.090902   54101 cri.go:89] found id: ""
	I1212 00:23:18.090916   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.090923   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:18.090927   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:18.090987   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:18.120598   54101 cri.go:89] found id: ""
	I1212 00:23:18.120611   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.120618   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:18.120623   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:18.120686   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:18.147780   54101 cri.go:89] found id: ""
	I1212 00:23:18.147794   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.147801   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:18.147806   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:18.147863   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:18.176272   54101 cri.go:89] found id: ""
	I1212 00:23:18.176286   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.176293   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:18.176306   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:18.176368   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:18.201024   54101 cri.go:89] found id: ""
	I1212 00:23:18.201037   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.201045   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:18.201052   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:18.201062   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:18.211552   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:18.211566   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:18.274135   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:18.266305   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.266699   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.268383   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.268854   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.270264   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:18.274145   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:18.274155   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:18.339516   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:18.339534   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:18.369221   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:18.369236   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
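
The timestamps show the whole gather cycle re-running roughly every three seconds (00:23:12, :15, :18, ...), each round gated on a pgrep check for a running kube-apiserver. Structurally this is a poll-until-deadline loop; a simplified sketch of that shape (not minikube's actual implementation — the five-minute deadline here is invented for illustration, the real timeout is not visible in this excerpt):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the log's check:
//   sudo pgrep -xnf kube-apiserver.*minikube.*
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(5 * time.Minute) // illustrative only
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		// gather diagnostics here, as the log does, then retry
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
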
	I1212 00:23:20.928503   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:20.938705   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:20.938771   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:20.966429   54101 cri.go:89] found id: ""
	I1212 00:23:20.966442   54101 logs.go:282] 0 containers: []
	W1212 00:23:20.966449   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:20.966463   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:20.966521   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:20.993659   54101 cri.go:89] found id: ""
	I1212 00:23:20.993674   54101 logs.go:282] 0 containers: []
	W1212 00:23:20.993694   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:20.993700   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:20.993783   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:21.021877   54101 cri.go:89] found id: ""
	I1212 00:23:21.021894   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.021901   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:21.021907   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:21.021974   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:21.050301   54101 cri.go:89] found id: ""
	I1212 00:23:21.050315   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.050333   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:21.050338   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:21.050394   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:21.074369   54101 cri.go:89] found id: ""
	I1212 00:23:21.074382   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.074399   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:21.074404   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:21.074459   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:21.100847   54101 cri.go:89] found id: ""
	I1212 00:23:21.100860   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.100867   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:21.100872   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:21.100930   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:21.129915   54101 cri.go:89] found id: ""
	I1212 00:23:21.129928   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.129950   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:21.129958   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:21.129967   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:21.186387   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:21.186407   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:21.197421   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:21.197437   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:21.261078   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:21.252661   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.253431   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.255174   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.255799   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.257304   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:21.261090   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:21.261104   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:21.326885   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:21.326903   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:23.859105   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:23.869083   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:23.869143   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:23.892667   54101 cri.go:89] found id: ""
	I1212 00:23:23.892681   54101 logs.go:282] 0 containers: []
	W1212 00:23:23.892688   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:23.892693   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:23.892755   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:23.916368   54101 cri.go:89] found id: ""
	I1212 00:23:23.916381   54101 logs.go:282] 0 containers: []
	W1212 00:23:23.916388   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:23.916393   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:23.916456   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:23.953674   54101 cri.go:89] found id: ""
	I1212 00:23:23.953688   54101 logs.go:282] 0 containers: []
	W1212 00:23:23.953695   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:23.953700   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:23.953755   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:23.977280   54101 cri.go:89] found id: ""
	I1212 00:23:23.977293   54101 logs.go:282] 0 containers: []
	W1212 00:23:23.977300   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:23.977305   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:23.977364   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:24.002961   54101 cri.go:89] found id: ""
	I1212 00:23:24.002985   54101 logs.go:282] 0 containers: []
	W1212 00:23:24.003014   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:24.003020   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:24.003098   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:24.034368   54101 cri.go:89] found id: ""
	I1212 00:23:24.034382   54101 logs.go:282] 0 containers: []
	W1212 00:23:24.034393   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:24.034398   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:24.034470   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:24.065761   54101 cri.go:89] found id: ""
	I1212 00:23:24.065775   54101 logs.go:282] 0 containers: []
	W1212 00:23:24.065788   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:24.065796   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:24.065806   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:24.122870   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:24.122890   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:24.134384   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:24.134398   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:24.204008   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:24.196235   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.196812   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.198515   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.198869   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.200088   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:24.204018   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:24.204029   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:24.268817   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:24.268835   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
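
Each "Gathering logs for ..." step is executed on the node through /bin/bash -c (ssh_runner.go:195). The same four gathers can be scripted in one place; the command strings below are copied verbatim from the log, while the gather wrapper itself is hypothetical:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command the way the log shows: /bin/bash -c "...".
func gather(label, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("=== %s (err: %v) ===\n%s\n", label, err, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("containerd", "sudo journalctl -u containerd -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}

The last command's `which crictl` fallback and the trailing `|| sudo docker ps -a` let the same gather step work whether the node runtime is containerd or docker.
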
	I1212 00:23:26.805407   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:26.815561   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:26.815619   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:26.843361   54101 cri.go:89] found id: ""
	I1212 00:23:26.843375   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.843382   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:26.843388   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:26.843447   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:26.867615   54101 cri.go:89] found id: ""
	I1212 00:23:26.867630   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.867637   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:26.867642   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:26.867698   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:26.897089   54101 cri.go:89] found id: ""
	I1212 00:23:26.897102   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.897109   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:26.897114   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:26.897173   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:26.920797   54101 cri.go:89] found id: ""
	I1212 00:23:26.920810   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.920817   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:26.920822   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:26.920878   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:26.948949   54101 cri.go:89] found id: ""
	I1212 00:23:26.948963   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.948970   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:26.948975   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:26.949034   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:26.972541   54101 cri.go:89] found id: ""
	I1212 00:23:26.972555   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.972563   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:26.972568   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:26.972631   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:26.998049   54101 cri.go:89] found id: ""
	I1212 00:23:26.998065   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.998073   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:26.998089   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:26.998102   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:27.027523   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:27.027538   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:27.085127   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:27.085146   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:27.096087   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:27.096101   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:27.162090   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:27.153308   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.154010   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.155943   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.156645   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.158348   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:27.162101   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:27.162111   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:29.728366   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:29.738393   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:29.738452   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:29.764004   54101 cri.go:89] found id: ""
	I1212 00:23:29.764017   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.764024   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:29.764029   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:29.764089   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:29.787843   54101 cri.go:89] found id: ""
	I1212 00:23:29.787857   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.787874   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:29.787879   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:29.787936   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:29.812859   54101 cri.go:89] found id: ""
	I1212 00:23:29.812872   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.812879   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:29.812884   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:29.812941   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:29.837580   54101 cri.go:89] found id: ""
	I1212 00:23:29.837593   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.837600   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:29.837605   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:29.837673   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:29.861535   54101 cri.go:89] found id: ""
	I1212 00:23:29.861560   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.861567   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:29.861572   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:29.861644   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:29.886533   54101 cri.go:89] found id: ""
	I1212 00:23:29.886546   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.886553   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:29.886559   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:29.886624   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:29.913577   54101 cri.go:89] found id: ""
	I1212 00:23:29.913604   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.913611   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:29.913619   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:29.913630   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:29.940660   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:29.940675   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:29.995286   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:29.995307   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:30.029235   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:30.029252   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:30.103143   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:30.093717   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.094664   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.096287   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.096764   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.098381   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:30.103157   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:30.103168   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:32.666081   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:32.676000   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:32.676071   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:32.701112   54101 cri.go:89] found id: ""
	I1212 00:23:32.701125   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.701133   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:32.701138   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:32.701195   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:32.727727   54101 cri.go:89] found id: ""
	I1212 00:23:32.727741   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.727748   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:32.727753   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:32.727810   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:32.756561   54101 cri.go:89] found id: ""
	I1212 00:23:32.756574   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.756581   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:32.756586   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:32.756648   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:32.781745   54101 cri.go:89] found id: ""
	I1212 00:23:32.781758   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.781765   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:32.781771   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:32.781830   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:32.807544   54101 cri.go:89] found id: ""
	I1212 00:23:32.807558   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.807571   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:32.807576   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:32.807634   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:32.837232   54101 cri.go:89] found id: ""
	I1212 00:23:32.837246   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.837253   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:32.837259   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:32.837321   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:32.864631   54101 cri.go:89] found id: ""
	I1212 00:23:32.864645   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.864660   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:32.864667   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:32.864678   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:32.927240   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:32.919009   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.919629   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.921337   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.921842   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.923382   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:32.927249   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:32.927276   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:32.990198   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:32.990226   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:33.020370   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:33.020389   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:33.077339   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:33.077359   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:35.589167   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:35.599047   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:35.599105   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:35.624300   54101 cri.go:89] found id: ""
	I1212 00:23:35.624315   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.624322   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:35.624327   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:35.624387   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:35.647815   54101 cri.go:89] found id: ""
	I1212 00:23:35.647829   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.647837   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:35.647842   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:35.647900   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:35.676530   54101 cri.go:89] found id: ""
	I1212 00:23:35.676544   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.676551   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:35.676556   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:35.676617   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:35.705816   54101 cri.go:89] found id: ""
	I1212 00:23:35.705831   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.705838   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:35.705844   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:35.705903   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:35.733393   54101 cri.go:89] found id: ""
	I1212 00:23:35.733413   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.733421   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:35.733426   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:35.733485   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:35.757717   54101 cri.go:89] found id: ""
	I1212 00:23:35.757731   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.757738   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:35.757743   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:35.757800   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:35.782446   54101 cri.go:89] found id: ""
	I1212 00:23:35.782459   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.782478   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:35.782487   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:35.782497   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:35.839811   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:35.839828   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:35.850443   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:35.850458   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:35.918359   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:35.910728   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.911186   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.912701   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.913021   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.914471   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:35.910728   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.911186   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.912701   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.913021   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.914471   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:35.918370   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:35.918382   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:35.980124   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:35.980143   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
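	[Editor's note, not part of the captured log: the cycle above — pgrep for kube-apiserver, one crictl query per control-plane component, then gathering kubelet/dmesg/describe-nodes/containerd/container-status logs — repeats every few seconds for the rest of this test. As a rough illustrative sketch only, the same per-component probe can be reproduced by hand on the node; the component names and the crictl invocation are taken verbatim from the log lines above:

	# hypothetical reproduction of the probe loop shown in this log
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  if [ -z "$(sudo crictl ps -a --quiet --name="$c")" ]; then
	    echo "No container was found matching \"$c\""
	  fi
	done
	]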
	I1212 00:23:38.530800   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:38.542531   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:38.542599   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:38.568754   54101 cri.go:89] found id: ""
	I1212 00:23:38.568767   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.568774   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:38.568788   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:38.568846   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:38.598747   54101 cri.go:89] found id: ""
	I1212 00:23:38.598759   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.598766   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:38.598771   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:38.598838   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:38.623489   54101 cri.go:89] found id: ""
	I1212 00:23:38.623503   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.623519   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:38.623525   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:38.623594   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:38.648000   54101 cri.go:89] found id: ""
	I1212 00:23:38.648013   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.648022   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:38.648027   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:38.648084   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:38.674721   54101 cri.go:89] found id: ""
	I1212 00:23:38.674734   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.674741   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:38.674746   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:38.674808   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:38.700695   54101 cri.go:89] found id: ""
	I1212 00:23:38.700708   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.700715   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:38.700720   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:38.700780   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:38.724873   54101 cri.go:89] found id: ""
	I1212 00:23:38.724886   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.724892   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:38.724900   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:38.724910   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:38.751419   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:38.751434   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:38.807512   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:38.807530   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:38.818972   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:38.819002   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:38.889413   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:38.879843   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.881217   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.881803   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.883544   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.884066   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:38.879843   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.881217   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.881803   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.883544   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.884066   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:38.889425   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:38.889435   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:41.452716   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:41.462650   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:41.462718   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:41.487241   54101 cri.go:89] found id: ""
	I1212 00:23:41.487264   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.487271   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:41.487277   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:41.487335   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:41.511441   54101 cri.go:89] found id: ""
	I1212 00:23:41.511454   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.511461   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:41.511466   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:41.511523   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:41.560805   54101 cri.go:89] found id: ""
	I1212 00:23:41.560819   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.560826   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:41.560831   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:41.560887   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:41.587388   54101 cri.go:89] found id: ""
	I1212 00:23:41.587402   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.587408   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:41.587413   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:41.587469   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:41.611964   54101 cri.go:89] found id: ""
	I1212 00:23:41.611979   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.611986   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:41.611991   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:41.612051   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:41.637582   54101 cri.go:89] found id: ""
	I1212 00:23:41.637595   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.637601   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:41.637606   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:41.637662   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:41.660916   54101 cri.go:89] found id: ""
	I1212 00:23:41.660939   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.660947   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:41.660955   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:41.660964   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:41.720148   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:41.720165   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:41.730670   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:41.730686   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:41.792978   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:41.784826   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.785364   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.786819   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.787322   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.788953   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:41.784826   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.785364   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.786819   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.787322   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.788953   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:41.792987   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:41.792997   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:41.853248   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:41.853264   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:44.384182   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:44.394508   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:44.394568   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:44.418597   54101 cri.go:89] found id: ""
	I1212 00:23:44.418612   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.418619   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:44.418624   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:44.418681   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:44.443581   54101 cri.go:89] found id: ""
	I1212 00:23:44.443595   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.443603   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:44.443608   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:44.443665   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:44.468881   54101 cri.go:89] found id: ""
	I1212 00:23:44.468895   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.468902   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:44.468907   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:44.468965   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:44.493396   54101 cri.go:89] found id: ""
	I1212 00:23:44.493410   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.493417   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:44.493422   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:44.493479   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:44.517484   54101 cri.go:89] found id: ""
	I1212 00:23:44.517498   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.517505   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:44.517510   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:44.517570   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:44.550796   54101 cri.go:89] found id: ""
	I1212 00:23:44.550810   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.550817   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:44.550822   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:44.550883   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:44.576925   54101 cri.go:89] found id: ""
	I1212 00:23:44.576938   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.576946   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:44.576954   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:44.576964   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:44.589144   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:44.589160   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:44.657506   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:44.648963   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.649564   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.651341   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.651846   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.653593   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:44.648963   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.649564   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.651341   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.651846   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.653593   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:44.657515   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:44.657526   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:44.718495   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:44.718513   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:44.745494   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:44.745508   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:47.304216   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:47.314254   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:47.314318   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:47.339739   54101 cri.go:89] found id: ""
	I1212 00:23:47.339753   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.339760   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:47.339766   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:47.339822   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:47.364136   54101 cri.go:89] found id: ""
	I1212 00:23:47.364150   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.364157   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:47.364162   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:47.364226   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:47.387941   54101 cri.go:89] found id: ""
	I1212 00:23:47.387957   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.387964   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:47.387969   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:47.388026   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:47.412100   54101 cri.go:89] found id: ""
	I1212 00:23:47.412114   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.412121   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:47.412126   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:47.412187   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:47.437977   54101 cri.go:89] found id: ""
	I1212 00:23:47.437997   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.438005   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:47.438011   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:47.438070   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:47.464751   54101 cri.go:89] found id: ""
	I1212 00:23:47.464765   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.464772   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:47.464778   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:47.464834   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:47.492824   54101 cri.go:89] found id: ""
	I1212 00:23:47.492838   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.492845   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:47.492853   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:47.492863   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:47.549187   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:47.549205   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:47.561345   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:47.561361   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:47.637229   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:47.628185   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.629565   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.630374   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.631980   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.632725   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:47.628185   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.629565   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.630374   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.631980   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.632725   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:47.637238   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:47.637249   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:47.700044   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:47.700063   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:50.232142   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:50.242326   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:50.242389   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:50.267337   54101 cri.go:89] found id: ""
	I1212 00:23:50.267351   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.267359   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:50.267364   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:50.267424   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:50.294402   54101 cri.go:89] found id: ""
	I1212 00:23:50.294416   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.294424   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:50.294428   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:50.294489   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:50.318907   54101 cri.go:89] found id: ""
	I1212 00:23:50.318921   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.318928   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:50.318938   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:50.319041   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:50.344349   54101 cri.go:89] found id: ""
	I1212 00:23:50.344362   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.344370   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:50.344375   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:50.344442   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:50.374529   54101 cri.go:89] found id: ""
	I1212 00:23:50.374543   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.374550   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:50.374556   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:50.374612   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:50.400874   54101 cri.go:89] found id: ""
	I1212 00:23:50.400888   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.400896   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:50.400903   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:50.400977   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:50.428510   54101 cri.go:89] found id: ""
	I1212 00:23:50.428525   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.428533   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:50.428541   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:50.428553   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:50.455528   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:50.455545   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:50.510724   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:50.510743   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:50.521665   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:50.521681   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:50.611401   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:50.603277   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.603798   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.605445   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.605921   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.607608   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:50.603277   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.603798   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.605445   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.605921   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.607608   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:50.611411   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:50.611424   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:53.175490   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:53.185411   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:53.185474   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:53.209584   54101 cri.go:89] found id: ""
	I1212 00:23:53.209597   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.209616   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:53.209628   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:53.209693   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:53.233686   54101 cri.go:89] found id: ""
	I1212 00:23:53.233700   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.233707   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:53.233712   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:53.233774   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:53.257587   54101 cri.go:89] found id: ""
	I1212 00:23:53.257601   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.257608   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:53.257613   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:53.257670   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:53.285867   54101 cri.go:89] found id: ""
	I1212 00:23:53.285880   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.285887   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:53.285892   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:53.285947   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:53.312516   54101 cri.go:89] found id: ""
	I1212 00:23:53.312530   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.312537   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:53.312541   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:53.312599   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:53.336425   54101 cri.go:89] found id: ""
	I1212 00:23:53.336445   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.336452   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:53.336457   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:53.336514   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:53.360258   54101 cri.go:89] found id: ""
	I1212 00:23:53.360271   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.360279   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:53.360287   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:53.360296   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:53.422643   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:53.422660   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:53.451682   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:53.451698   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:53.508302   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:53.508320   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:53.518839   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:53.518855   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:53.608163   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:53.599819   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.600615   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.602118   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.602666   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.604185   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:53.599819   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.600615   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.602118   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.602666   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.604185   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
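	[Editor's note, not part of the captured log: every "describe nodes" attempt in this section fails identically because kubectl is dialing https://localhost:8441 while no kube-apiserver container exists, so the connection is refused before any API call is made. As an illustrative check only, assuming the binary and kubeconfig paths shown in the log above, apiserver reachability can be verified directly on the node before retrying describe:

	# hypothetical readiness check against the same endpoint the log probes
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz \
	  || echo "apiserver unreachable on localhost:8441"
	]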
	I1212 00:23:56.109087   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:56.119165   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:56.119227   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:56.143243   54101 cri.go:89] found id: ""
	I1212 00:23:56.143256   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.143263   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:56.143268   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:56.143326   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:56.168289   54101 cri.go:89] found id: ""
	I1212 00:23:56.168309   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.168316   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:56.168321   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:56.168379   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:56.192149   54101 cri.go:89] found id: ""
	I1212 00:23:56.192163   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.192172   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:56.192177   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:56.192238   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:56.216868   54101 cri.go:89] found id: ""
	I1212 00:23:56.216880   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.216887   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:56.216892   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:56.216954   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:56.241928   54101 cri.go:89] found id: ""
	I1212 00:23:56.241941   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.241951   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:56.241956   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:56.242011   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:56.265468   54101 cri.go:89] found id: ""
	I1212 00:23:56.265481   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.265488   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:56.265493   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:56.265552   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:56.290530   54101 cri.go:89] found id: ""
	I1212 00:23:56.290544   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.290551   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:56.290559   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:56.290569   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:56.345149   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:56.345167   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:56.355854   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:56.355869   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:56.418379   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:56.410553   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.411250   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.412854   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.413395   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.414621   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:56.410553   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.411250   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.412854   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.413395   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.414621   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:56.418389   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:56.418399   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:56.480524   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:56.480543   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:59.011832   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:59.022048   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:59.022108   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:59.046210   54101 cri.go:89] found id: ""
	I1212 00:23:59.046224   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.046231   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:59.046236   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:59.046299   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:59.071192   54101 cri.go:89] found id: ""
	I1212 00:23:59.071206   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.071213   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:59.071217   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:59.071278   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:59.095678   54101 cri.go:89] found id: ""
	I1212 00:23:59.095692   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.095698   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:59.095703   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:59.095760   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:59.119812   54101 cri.go:89] found id: ""
	I1212 00:23:59.119825   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.119832   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:59.119837   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:59.119897   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:59.143943   54101 cri.go:89] found id: ""
	I1212 00:23:59.143957   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.143964   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:59.143969   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:59.144028   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:59.174483   54101 cri.go:89] found id: ""
	I1212 00:23:59.174506   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.174513   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:59.174519   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:59.174576   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:59.202048   54101 cri.go:89] found id: ""
	I1212 00:23:59.202061   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.202068   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:59.202076   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:59.202087   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:59.257143   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:59.257161   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:59.268235   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:59.268252   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:59.334149   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:59.326488   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.326882   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.328393   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.328789   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.330327   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:59.326488   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.326882   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.328393   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.328789   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.330327   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:59.334159   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:59.334184   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:59.396366   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:59.396383   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
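The cycle above is minikube's control-plane probe: for each expected component it lists matching CRI containers and logs a warning when none exist. Below is a minimal Go sketch of that probe, not minikube's actual cri.go implementation; only the crictl invocation and component names are taken from the log, the program shape is an assumption.

	// Sketch only: loop over the control-plane components the log probes
	// and report when crictl finds no matching container.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range components {
			// Mirrors the logged command: sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a",
				"--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("probe %q failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
			} else {
				fmt.Printf("found ids for %q: %v\n", name, ids)
			}
		}
	}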
	I1212 00:24:01.926850   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:01.937253   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:01.937312   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:01.966272   54101 cri.go:89] found id: ""
	I1212 00:24:01.966286   54101 logs.go:282] 0 containers: []
	W1212 00:24:01.966293   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:01.966298   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:01.966359   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:01.991061   54101 cri.go:89] found id: ""
	I1212 00:24:01.991075   54101 logs.go:282] 0 containers: []
	W1212 00:24:01.991082   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:01.991087   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:01.991145   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:02.019646   54101 cri.go:89] found id: ""
	I1212 00:24:02.019661   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.019668   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:02.019673   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:02.019731   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:02.044619   54101 cri.go:89] found id: ""
	I1212 00:24:02.044634   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.044641   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:02.044648   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:02.044704   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:02.069486   54101 cri.go:89] found id: ""
	I1212 00:24:02.069500   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.069508   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:02.069512   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:02.069569   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:02.096887   54101 cri.go:89] found id: ""
	I1212 00:24:02.096901   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.096908   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:02.096913   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:02.096974   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:02.124826   54101 cri.go:89] found id: ""
	I1212 00:24:02.124839   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.124847   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:02.124854   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:02.124864   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:02.152773   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:02.152789   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:02.210656   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:02.210676   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:02.222006   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:02.222022   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:02.293474   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:02.284427   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.285315   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.287050   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.287829   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.289483   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:02.284427   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.285315   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.287050   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.287829   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.289483   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:02.293484   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:02.293499   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
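Every "describe nodes" attempt fails identically: nothing is listening on localhost:8441, so kubectl's discovery requests are refused before any Kubernetes API call is made. A self-contained reachability check that reproduces the symptom follows; this is a sketch for illustration, not part of minikube.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// kubectl's "connection refused" on localhost:8441 means no
		// listener is bound to the apiserver port yet.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err) // matches the log
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}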
	I1212 00:24:04.860582   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:04.870768   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:04.870829   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:04.896675   54101 cri.go:89] found id: ""
	I1212 00:24:04.896689   54101 logs.go:282] 0 containers: []
	W1212 00:24:04.896696   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:04.896701   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:04.896759   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:04.925636   54101 cri.go:89] found id: ""
	I1212 00:24:04.925651   54101 logs.go:282] 0 containers: []
	W1212 00:24:04.925658   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:04.925664   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:04.925730   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:04.950839   54101 cri.go:89] found id: ""
	I1212 00:24:04.950853   54101 logs.go:282] 0 containers: []
	W1212 00:24:04.950860   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:04.950865   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:04.950922   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:04.976777   54101 cri.go:89] found id: ""
	I1212 00:24:04.976792   54101 logs.go:282] 0 containers: []
	W1212 00:24:04.976799   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:04.976804   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:04.976862   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:05.007523   54101 cri.go:89] found id: ""
	I1212 00:24:05.007538   54101 logs.go:282] 0 containers: []
	W1212 00:24:05.007547   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:05.007552   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:05.007615   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:05.034390   54101 cri.go:89] found id: ""
	I1212 00:24:05.034412   54101 logs.go:282] 0 containers: []
	W1212 00:24:05.034419   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:05.034424   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:05.034492   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:05.060364   54101 cri.go:89] found id: ""
	I1212 00:24:05.060378   54101 logs.go:282] 0 containers: []
	W1212 00:24:05.060385   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:05.060394   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:05.060405   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:05.130824   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:05.122601   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.123172   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.124809   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.125287   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.126908   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:05.122601   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.123172   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.124809   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.125287   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.126908   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:05.130836   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:05.130846   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:05.193088   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:05.193106   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:05.221288   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:05.221305   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:05.280911   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:05.280928   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:07.791957   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:07.803197   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:07.803258   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:07.849866   54101 cri.go:89] found id: ""
	I1212 00:24:07.849879   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.849885   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:07.849890   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:07.849944   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:07.879098   54101 cri.go:89] found id: ""
	I1212 00:24:07.879112   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.879118   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:07.879123   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:07.879180   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:07.903042   54101 cri.go:89] found id: ""
	I1212 00:24:07.903056   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.903063   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:07.903068   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:07.903124   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:07.926973   54101 cri.go:89] found id: ""
	I1212 00:24:07.926986   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.927024   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:07.927029   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:07.927093   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:07.952849   54101 cri.go:89] found id: ""
	I1212 00:24:07.952863   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.952870   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:07.952875   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:07.952937   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:07.976048   54101 cri.go:89] found id: ""
	I1212 00:24:07.976061   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.976068   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:07.976073   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:07.976127   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:08.005144   54101 cri.go:89] found id: ""
	I1212 00:24:08.005157   54101 logs.go:282] 0 containers: []
	W1212 00:24:08.005165   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:08.005173   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:08.005183   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:08.062459   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:08.062477   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:08.073793   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:08.073821   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:08.140014   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:08.132203   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.132726   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.134246   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.134712   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.136200   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:08.132203   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.132726   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.134246   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.134712   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.136200   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:08.140025   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:08.140035   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:08.202051   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:08.202070   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:10.733798   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:10.743998   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:10.744057   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:10.768781   54101 cri.go:89] found id: ""
	I1212 00:24:10.768795   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.768802   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:10.768807   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:10.768871   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:10.811478   54101 cri.go:89] found id: ""
	I1212 00:24:10.811492   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.811499   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:10.811504   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:10.811570   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:10.842339   54101 cri.go:89] found id: ""
	I1212 00:24:10.842358   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.842365   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:10.842370   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:10.842431   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:10.874129   54101 cri.go:89] found id: ""
	I1212 00:24:10.874143   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.874151   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:10.874157   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:10.874217   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:10.898217   54101 cri.go:89] found id: ""
	I1212 00:24:10.898231   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.898244   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:10.898249   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:10.898306   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:10.923360   54101 cri.go:89] found id: ""
	I1212 00:24:10.923374   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.923380   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:10.923385   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:10.923442   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:10.947605   54101 cri.go:89] found id: ""
	I1212 00:24:10.947619   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.947626   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:10.947634   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:10.947645   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:11.006969   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:11.006995   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:11.018264   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:11.018281   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:11.082660   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:11.073705   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.074224   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.075940   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.076685   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.078178   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:11.073705   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.074224   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.075940   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.076685   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.078178   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:11.082671   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:11.082681   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:11.144246   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:11.144263   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:13.671933   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:13.683185   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:13.683253   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:13.708906   54101 cri.go:89] found id: ""
	I1212 00:24:13.708920   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.708927   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:13.708932   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:13.709070   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:13.733465   54101 cri.go:89] found id: ""
	I1212 00:24:13.733479   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.733486   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:13.733491   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:13.733555   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:13.757055   54101 cri.go:89] found id: ""
	I1212 00:24:13.757069   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.757076   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:13.757084   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:13.757142   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:13.781588   54101 cri.go:89] found id: ""
	I1212 00:24:13.781602   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.781609   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:13.781614   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:13.781674   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:13.811312   54101 cri.go:89] found id: ""
	I1212 00:24:13.811325   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.811333   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:13.811337   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:13.811394   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:13.844313   54101 cri.go:89] found id: ""
	I1212 00:24:13.844326   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.844333   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:13.844338   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:13.844421   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:13.868420   54101 cri.go:89] found id: ""
	I1212 00:24:13.868434   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.868441   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:13.868449   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:13.868459   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:13.923519   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:13.923536   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:13.934615   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:13.934631   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:14.000483   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:13.989816   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.990515   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.992025   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.992486   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.995350   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:13.989816   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.990515   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.992025   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.992486   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.995350   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:14.000493   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:14.000505   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:14.063145   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:14.063165   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:16.593154   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:16.603519   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:16.603584   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:16.632576   54101 cri.go:89] found id: ""
	I1212 00:24:16.632589   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.632596   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:16.632603   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:16.632663   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:16.661504   54101 cri.go:89] found id: ""
	I1212 00:24:16.661518   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.661525   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:16.661530   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:16.661587   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:16.686915   54101 cri.go:89] found id: ""
	I1212 00:24:16.686930   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.686937   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:16.686942   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:16.687035   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:16.711579   54101 cri.go:89] found id: ""
	I1212 00:24:16.711594   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.711601   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:16.711606   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:16.711664   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:16.735976   54101 cri.go:89] found id: ""
	I1212 00:24:16.735990   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.735998   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:16.736003   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:16.736058   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:16.760337   54101 cri.go:89] found id: ""
	I1212 00:24:16.760351   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.760359   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:16.760364   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:16.760429   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:16.787594   54101 cri.go:89] found id: ""
	I1212 00:24:16.787608   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.787625   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:16.787634   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:16.787644   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:16.853787   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:16.853805   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:16.865402   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:16.865418   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:16.934251   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:16.925653   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.926416   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.928097   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.928745   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.930355   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:16.925653   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.926416   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.928097   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.928745   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.930355   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:16.934261   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:16.934272   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:16.995335   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:16.995360   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:19.530311   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:19.540648   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:19.540711   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:19.573854   54101 cri.go:89] found id: ""
	I1212 00:24:19.573868   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.573875   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:19.573880   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:19.573938   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:19.598830   54101 cri.go:89] found id: ""
	I1212 00:24:19.598850   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.598857   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:19.598862   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:19.598965   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:19.624335   54101 cri.go:89] found id: ""
	I1212 00:24:19.624349   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.624357   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:19.624364   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:19.624451   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:19.650800   54101 cri.go:89] found id: ""
	I1212 00:24:19.650813   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.650820   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:19.650826   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:19.650887   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:19.676025   54101 cri.go:89] found id: ""
	I1212 00:24:19.676038   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.676046   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:19.676051   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:19.676111   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:19.702971   54101 cri.go:89] found id: ""
	I1212 00:24:19.702984   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.703003   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:19.703008   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:19.703066   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:19.727517   54101 cri.go:89] found id: ""
	I1212 00:24:19.727530   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.727537   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:19.727545   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:19.727558   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:19.784930   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:19.784948   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:19.799325   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:19.799340   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:19.872030   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:19.864278   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.865037   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.866546   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.866841   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.868283   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:19.864278   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.865037   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.866546   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.866841   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.868283   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:19.872041   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:19.872052   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:19.934549   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:19.934568   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:22.466009   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:22.476227   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:22.476288   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:22.501678   54101 cri.go:89] found id: ""
	I1212 00:24:22.501705   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.501712   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:22.501717   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:22.501785   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:22.531238   54101 cri.go:89] found id: ""
	I1212 00:24:22.531251   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.531258   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:22.531263   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:22.531321   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:22.554936   54101 cri.go:89] found id: ""
	I1212 00:24:22.554949   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.554956   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:22.554962   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:22.555055   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:22.582980   54101 cri.go:89] found id: ""
	I1212 00:24:22.583017   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.583025   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:22.583030   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:22.583094   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:22.608038   54101 cri.go:89] found id: ""
	I1212 00:24:22.608051   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.608069   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:22.608074   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:22.608134   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:22.631929   54101 cri.go:89] found id: ""
	I1212 00:24:22.631942   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.631959   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:22.631965   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:22.632035   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:22.660069   54101 cri.go:89] found id: ""
	I1212 00:24:22.660083   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.660090   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:22.660107   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:22.660118   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:22.722675   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:22.714219   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.714970   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.716604   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.716888   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.718358   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:22.714219   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.714970   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.716604   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.716888   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.718358   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:22.722685   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:22.722695   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:22.783718   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:22.783736   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:22.815064   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:22.815082   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:22.876099   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:22.876117   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
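	The cycle above is minikube's container probe: for each control-plane component it runs `sudo crictl ps -a --quiet --name=<component>` over SSH and treats empty output as "no container found", which is why every probe here ends in `found id: ""` and a `No container was found matching` warning. A minimal Go sketch of that probe, run locally rather than over SSH, assuming crictl is on PATH and sudo needs no password (neither is guaranteed by this log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// probe mirrors the log's check: --quiet makes crictl print only
	// container IDs, one per line, so empty output means zero containers.
	func probe(name string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return false, err
		}
		return len(strings.Fields(string(out))) > 0, nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
			found, err := probe(c)
			if err != nil {
				fmt.Printf("%s: probe failed: %v\n", c, err)
				continue
			}
			// found=false corresponds to the `found id: ""` lines above.
			fmt.Printf("%s: found=%v\n", c, found)
		}
	}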
	I1212 00:24:25.389270   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:25.399208   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:25.399264   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:25.423023   54101 cri.go:89] found id: ""
	I1212 00:24:25.423036   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.423043   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:25.423048   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:25.423110   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:25.447118   54101 cri.go:89] found id: ""
	I1212 00:24:25.447132   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.447140   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:25.447145   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:25.447203   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:25.471506   54101 cri.go:89] found id: ""
	I1212 00:24:25.471520   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.471527   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:25.471532   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:25.471588   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:25.496289   54101 cri.go:89] found id: ""
	I1212 00:24:25.496302   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.496310   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:25.496315   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:25.496371   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:25.521055   54101 cri.go:89] found id: ""
	I1212 00:24:25.521068   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.521075   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:25.521080   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:25.521136   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:25.545427   54101 cri.go:89] found id: ""
	I1212 00:24:25.545441   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.545448   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:25.545453   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:25.545509   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:25.573059   54101 cri.go:89] found id: ""
	I1212 00:24:25.573073   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.573080   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:25.573088   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:25.573098   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:25.627642   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:25.627661   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:25.638176   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:25.638192   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:25.702262   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:25.692958   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.693521   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.695662   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.696870   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.697262   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:25.692958   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.693521   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.695662   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.696870   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.697262   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:25.702271   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:25.702283   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:25.768032   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:25.768050   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
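	Every "describe nodes" attempt in this run fails identically: kubectl cannot reach https://localhost:8441 and gets `connect: connection refused`, meaning nothing is bound to the apiserver port inside the node (8441 comes from this run's --apiserver-port flag). A plain TCP dial reproduces the same symptom without invoking kubectl; this is an illustrative check, not how minikube itself tests reachability:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// A refused dial is exactly kubectl's failure mode above: the TCP
		// handshake is rejected because no process listens on the port.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on 8441; the failure lies elsewhere")
	}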
	I1212 00:24:28.306236   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:28.316297   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:28.316366   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:28.339825   54101 cri.go:89] found id: ""
	I1212 00:24:28.339838   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.339855   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:28.339860   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:28.339930   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:28.364813   54101 cri.go:89] found id: ""
	I1212 00:24:28.364826   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.364832   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:28.364837   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:28.364902   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:28.398903   54101 cri.go:89] found id: ""
	I1212 00:24:28.398917   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.398923   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:28.398928   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:28.398985   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:28.424563   54101 cri.go:89] found id: ""
	I1212 00:24:28.424577   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.424584   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:28.424595   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:28.424652   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:28.448511   54101 cri.go:89] found id: ""
	I1212 00:24:28.448524   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.448531   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:28.448536   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:28.448595   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:28.473282   54101 cri.go:89] found id: ""
	I1212 00:24:28.473295   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.473303   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:28.473308   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:28.473364   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:28.496850   54101 cri.go:89] found id: ""
	I1212 00:24:28.496864   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.496871   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:28.496879   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:28.496889   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:28.563054   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:28.554678   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.555432   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.557227   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.557770   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.559159   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:28.554678   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.555432   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.557227   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.557770   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.559159   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:28.563064   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:28.563076   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:28.625015   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:28.625034   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:28.656873   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:28.656887   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:28.714792   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:28.714811   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:31.225710   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:31.235567   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:31.235633   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:31.259473   54101 cri.go:89] found id: ""
	I1212 00:24:31.259487   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.259494   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:31.259499   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:31.259556   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:31.284058   54101 cri.go:89] found id: ""
	I1212 00:24:31.284070   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.284077   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:31.284082   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:31.284138   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:31.306894   54101 cri.go:89] found id: ""
	I1212 00:24:31.306907   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.306914   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:31.306918   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:31.306978   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:31.334534   54101 cri.go:89] found id: ""
	I1212 00:24:31.334547   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.334554   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:31.334559   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:31.334615   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:31.359236   54101 cri.go:89] found id: ""
	I1212 00:24:31.359250   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.359258   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:31.359263   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:31.359321   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:31.383234   54101 cri.go:89] found id: ""
	I1212 00:24:31.383247   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.383254   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:31.383259   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:31.383314   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:31.407612   54101 cri.go:89] found id: ""
	I1212 00:24:31.407624   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.407631   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:31.407638   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:31.407650   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:31.470123   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:31.470142   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:31.497215   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:31.497231   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:31.553428   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:31.553445   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:31.564292   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:31.564307   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:31.630782   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:31.622216   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.622767   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.624650   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.625108   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.626798   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:31.622216   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.622767   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.624650   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.625108   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.626798   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
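	The timestamps show the whole probe-and-gather pass repeating on a roughly three-second cadence (00:24:22, :25, :28, :31, ...), gated on `sudo pgrep -xnf kube-apiserver.*minikube.*` finding an apiserver process. A minimal sketch of that wait loop; the interval and the overall budget are inferred from the timestamps here, not taken from minikube's source:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(5 * time.Minute) // assumed budget, not from the log
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists, so a nil
			// error from Run() means the apiserver came up.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver process is up")
				return
			}
			// ...on each miss the log above gathers kubelet/containerd/dmesg output...
			time.Sleep(3 * time.Second) // ~matches the 00:24:22 -> :25 -> :28 cadence
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}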
	I1212 00:24:34.131141   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:34.141238   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:34.141296   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:34.166032   54101 cri.go:89] found id: ""
	I1212 00:24:34.166045   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.166053   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:34.166057   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:34.166117   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:34.192065   54101 cri.go:89] found id: ""
	I1212 00:24:34.192079   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.192086   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:34.192091   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:34.192146   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:34.216626   54101 cri.go:89] found id: ""
	I1212 00:24:34.216640   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.216646   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:34.216652   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:34.216710   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:34.244975   54101 cri.go:89] found id: ""
	I1212 00:24:34.244989   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.244997   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:34.245002   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:34.245058   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:34.269781   54101 cri.go:89] found id: ""
	I1212 00:24:34.269795   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.269802   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:34.269807   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:34.269867   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:34.294651   54101 cri.go:89] found id: ""
	I1212 00:24:34.294664   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.294672   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:34.294677   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:34.294740   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:34.319772   54101 cri.go:89] found id: ""
	I1212 00:24:34.319786   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.319793   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:34.319801   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:34.319811   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:34.385955   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:34.377894   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.378715   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.380217   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.380694   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.382158   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:34.377894   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.378715   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.380217   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.380694   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.382158   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:34.385966   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:34.385976   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:34.451474   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:34.451493   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:34.478755   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:34.478770   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:34.538195   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:34.538217   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:37.049062   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:37.060494   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:37.060558   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:37.096756   54101 cri.go:89] found id: ""
	I1212 00:24:37.096769   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.096776   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:37.096781   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:37.096857   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:37.123426   54101 cri.go:89] found id: ""
	I1212 00:24:37.123441   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.123448   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:37.123453   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:37.123515   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:37.150366   54101 cri.go:89] found id: ""
	I1212 00:24:37.150379   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.150387   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:37.150392   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:37.150455   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:37.176266   54101 cri.go:89] found id: ""
	I1212 00:24:37.176281   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.176288   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:37.176293   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:37.176379   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:37.211184   54101 cri.go:89] found id: ""
	I1212 00:24:37.211198   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.211205   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:37.211210   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:37.211278   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:37.235978   54101 cri.go:89] found id: ""
	I1212 00:24:37.235992   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.235999   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:37.236005   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:37.236064   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:37.261068   54101 cri.go:89] found id: ""
	I1212 00:24:37.261082   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.261089   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:37.261097   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:37.261107   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:37.318643   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:37.318661   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:37.329758   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:37.329780   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:37.396581   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:37.388347   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.388766   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.390448   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.390869   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.392485   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:37.388347   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.388766   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.390448   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.390869   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.392485   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:37.396591   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:37.396602   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:37.463371   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:37.463399   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:39.999532   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:40.021164   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:40.021239   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:40.055893   54101 cri.go:89] found id: ""
	I1212 00:24:40.055908   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.055916   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:40.055921   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:40.055984   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:40.085805   54101 cri.go:89] found id: ""
	I1212 00:24:40.085821   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.085831   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:40.085837   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:40.085902   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:40.113784   54101 cri.go:89] found id: ""
	I1212 00:24:40.113797   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.113804   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:40.113809   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:40.113867   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:40.141930   54101 cri.go:89] found id: ""
	I1212 00:24:40.141945   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.141954   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:40.141959   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:40.142018   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:40.168489   54101 cri.go:89] found id: ""
	I1212 00:24:40.168503   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.168510   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:40.168515   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:40.168575   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:40.195479   54101 cri.go:89] found id: ""
	I1212 00:24:40.195494   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.195501   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:40.195506   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:40.195572   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:40.225277   54101 cri.go:89] found id: ""
	I1212 00:24:40.225290   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.225297   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:40.225305   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:40.225315   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:40.288821   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:40.280605   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.281157   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.282725   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.283252   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.284776   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:40.280605   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.281157   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.282725   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.283252   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.284776   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:40.288833   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:40.288842   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:40.351250   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:40.351269   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:40.379379   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:40.379395   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:40.435768   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:40.435785   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
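	Note that the "Gathering logs for ..." steps wrap their commands in `/bin/bash -c` rather than exec'ing them directly: the dmesg command ends in a pipe to tail, and the container-status command uses command substitution and `||` fallbacks, none of which work without a shell. A sketch of that pattern with the exact command strings from the log (the gather helper itself is illustrative, not minikube's API):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs a shell command line and prints its combined output;
	// bash -c is what lets pipes and `||` in the command strings work.
	func gather(label, cmd string) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: %v\n", label, err)
		}
		fmt.Printf("=== %s ===\n%s", label, out)
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("containerd", "sudo journalctl -u containerd -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	}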
	I1212 00:24:42.948581   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:42.958923   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:42.958983   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:42.983729   54101 cri.go:89] found id: ""
	I1212 00:24:42.983743   54101 logs.go:282] 0 containers: []
	W1212 00:24:42.983757   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:42.983762   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:42.983823   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:43.015682   54101 cri.go:89] found id: ""
	I1212 00:24:43.015696   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.015703   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:43.015708   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:43.015767   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:43.051631   54101 cri.go:89] found id: ""
	I1212 00:24:43.051644   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.051658   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:43.051662   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:43.051723   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:43.088521   54101 cri.go:89] found id: ""
	I1212 00:24:43.088535   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.088542   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:43.088547   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:43.088606   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:43.120828   54101 cri.go:89] found id: ""
	I1212 00:24:43.120842   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.120848   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:43.120854   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:43.120916   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:43.146768   54101 cri.go:89] found id: ""
	I1212 00:24:43.146782   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.146789   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:43.146794   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:43.146877   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:43.172067   54101 cri.go:89] found id: ""
	I1212 00:24:43.172081   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.172089   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:43.172097   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:43.172107   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:43.183115   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:43.183131   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:43.245564   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:43.237027   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.237641   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.239314   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.239878   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.241570   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:43.237027   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.237641   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.239314   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.239878   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.241570   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:43.245574   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:43.245585   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:43.307071   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:43.307092   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:43.334124   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:43.334141   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:45.892688   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:45.902643   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:45.902701   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:45.927418   54101 cri.go:89] found id: ""
	I1212 00:24:45.927432   54101 logs.go:282] 0 containers: []
	W1212 00:24:45.927439   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:45.927444   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:45.927504   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:45.950969   54101 cri.go:89] found id: ""
	I1212 00:24:45.950982   54101 logs.go:282] 0 containers: []
	W1212 00:24:45.951005   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:45.951011   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:45.951068   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:45.977037   54101 cri.go:89] found id: ""
	I1212 00:24:45.977050   54101 logs.go:282] 0 containers: []
	W1212 00:24:45.977057   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:45.977062   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:45.977127   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:46.003570   54101 cri.go:89] found id: ""
	I1212 00:24:46.003587   54101 logs.go:282] 0 containers: []
	W1212 00:24:46.003594   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:46.003600   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:46.003668   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:46.035920   54101 cri.go:89] found id: ""
	I1212 00:24:46.035934   54101 logs.go:282] 0 containers: []
	W1212 00:24:46.035941   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:46.035946   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:46.036003   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:46.073828   54101 cri.go:89] found id: ""
	I1212 00:24:46.073842   54101 logs.go:282] 0 containers: []
	W1212 00:24:46.073849   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:46.073854   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:46.073911   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:46.106173   54101 cri.go:89] found id: ""
	I1212 00:24:46.106194   54101 logs.go:282] 0 containers: []
	W1212 00:24:46.106218   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:46.106226   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:46.106239   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:46.162624   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:46.162643   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:46.173580   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:46.173602   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:46.238544   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:46.230296   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.230879   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.232549   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.233036   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.234601   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:46.238555   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:46.238566   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:46.301177   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:46.301195   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
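(Annotation: the probe sequence above is identical for every control-plane component: `sudo crictl ps -a --quiet --name=<component>` lists the IDs of all containers, running or exited, whose name matches, and an empty result is what the `found id: ""` / `0 containers` lines record. A minimal local sketch of that probe in Go, assuming crictl is installed and sudo-accessible; the helper name and component list are illustrative, not minikube's actual code.)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs shells out to crictl and returns the IDs of all
// containers (any state) whose name matches the given filter. An empty
// slice corresponds to the "0 containers: []" lines in the log above.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("%s: probe failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```

(crictl's `--quiet` flag prints only container IDs, which is what makes the empty-output check trivial.)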
	I1212 00:24:48.831063   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:48.843168   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:48.843226   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:48.871581   54101 cri.go:89] found id: ""
	I1212 00:24:48.871598   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.871605   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:48.871610   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:48.871669   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:48.896221   54101 cri.go:89] found id: ""
	I1212 00:24:48.896236   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.896244   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:48.896249   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:48.896307   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:48.920455   54101 cri.go:89] found id: ""
	I1212 00:24:48.920475   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.920483   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:48.920488   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:48.920550   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:48.944730   54101 cri.go:89] found id: ""
	I1212 00:24:48.944743   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.944750   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:48.944755   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:48.944815   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:48.969159   54101 cri.go:89] found id: ""
	I1212 00:24:48.969172   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.969179   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:48.969184   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:48.969238   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:49.001344   54101 cri.go:89] found id: ""
	I1212 00:24:49.001360   54101 logs.go:282] 0 containers: []
	W1212 00:24:49.001368   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:49.001373   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:49.001440   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:49.026664   54101 cri.go:89] found id: ""
	I1212 00:24:49.026688   54101 logs.go:282] 0 containers: []
	W1212 00:24:49.026696   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:49.026704   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:49.026715   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:49.088266   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:49.088284   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:49.099424   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:49.099438   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:49.166422   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:49.157832   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.158583   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.160190   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.160890   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.162624   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:49.166432   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:49.166445   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:49.227337   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:49.227355   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
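(Annotation: every `kubectl describe nodes` attempt fails with `connect: connection refused` on `[::1]:8441`, which means nothing is listening on the apiserver port at all; that is consistent with the empty `kube-apiserver` container probes above. The same reachability check can be made without kubectl. A minimal sketch, with the port taken from the log and the helper name illustrative:)

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// apiserverListening reports whether anything accepts TCP connections
// on the given address. A "connection refused" here is the same failure
// kubectl reports repeatedly in the log above.
func apiserverListening(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("dial %s: %v\n", addr, err)
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println("listening:", apiserverListening("localhost:8441"))
}
```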
	I1212 00:24:51.758903   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:51.768725   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:51.768786   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:51.792403   54101 cri.go:89] found id: ""
	I1212 00:24:51.792417   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.792424   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:51.792429   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:51.792497   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:51.819996   54101 cri.go:89] found id: ""
	I1212 00:24:51.820010   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.820016   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:51.820021   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:51.820080   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:51.844706   54101 cri.go:89] found id: ""
	I1212 00:24:51.844719   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.844727   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:51.844732   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:51.844800   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:51.870289   54101 cri.go:89] found id: ""
	I1212 00:24:51.870303   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.870316   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:51.870321   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:51.870378   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:51.894116   54101 cri.go:89] found id: ""
	I1212 00:24:51.894129   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.894137   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:51.894142   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:51.894200   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:51.918453   54101 cri.go:89] found id: ""
	I1212 00:24:51.918467   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.918474   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:51.918480   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:51.918538   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:51.942207   54101 cri.go:89] found id: ""
	I1212 00:24:51.942220   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.942228   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:51.942235   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:51.942245   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:51.970818   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:51.970835   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:52.026675   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:52.026692   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:52.044175   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:52.044191   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:52.123266   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:52.114940   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:52.115962   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:52.117604   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:52.118040   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:52.119539   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:52.123275   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:52.123286   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
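(Annotation: the "container status" command above is a fallback chain written in shell: `which crictl || echo crictl` resolves crictl's full path, falling back to the bare name for PATH lookup, and if the whole crictl invocation fails, `|| sudo docker ps -a` tries the Docker CLI instead. A sketch of the same try-in-order pattern in Go; the command list is illustrative, not minikube's actual code:)

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the fallback chain in the log's
// "container status" command: try crictl first, then docker,
// returning the first listing that succeeds.
func containerStatus() (string, error) {
	candidates := [][]string{
		{"sudo", "crictl", "ps", "-a"},
		{"sudo", "docker", "ps", "-a"},
	}
	for _, args := range candidates {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err == nil {
			return string(out), nil
		}
	}
	return "", fmt.Errorf("neither crictl nor docker produced a container listing")
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(out)
}
```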
	I1212 00:24:54.689949   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:54.700000   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:54.700070   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:54.725625   54101 cri.go:89] found id: ""
	I1212 00:24:54.725638   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.725645   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:54.725650   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:54.725716   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:54.748579   54101 cri.go:89] found id: ""
	I1212 00:24:54.748592   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.748600   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:54.748604   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:54.748661   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:54.772796   54101 cri.go:89] found id: ""
	I1212 00:24:54.772809   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.772816   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:54.772821   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:54.772876   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:54.797082   54101 cri.go:89] found id: ""
	I1212 00:24:54.797095   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.797102   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:54.797107   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:54.797168   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:54.821359   54101 cri.go:89] found id: ""
	I1212 00:24:54.821372   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.821379   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:54.821384   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:54.821441   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:54.848911   54101 cri.go:89] found id: ""
	I1212 00:24:54.848924   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.848931   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:54.848936   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:54.848993   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:54.872383   54101 cri.go:89] found id: ""
	I1212 00:24:54.872397   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.872404   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:54.872412   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:54.872422   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:54.927404   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:54.927423   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:54.938083   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:54.938099   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:55.013009   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:54.998953   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:55.000234   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:55.001265   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:55.004572   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:55.007712   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:55.013021   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:55.013032   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:55.084355   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:55.084375   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
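(Annotation: each gathering pass runs a fixed set of commands through `/bin/bash -c` on the node: `journalctl -u kubelet -n 400` and `journalctl -u containerd -n 400` pull the last 400 journal lines for each unit, and the `dmesg` pipeline keeps only warning-and-above kernel messages, uncolored and unpaged, trimmed to 400 lines. A sketch of running such pipelines the way the ssh_runner lines do, executed locally here for illustration; minikube runs them over SSH inside the node:)

```go
package main

import (
	"fmt"
	"os/exec"
)

// runViaBash executes a shell pipeline the same way the log's
// ssh_runner lines do: /bin/bash -c "<command>".
func runViaBash(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	for _, cmd := range []string{
		`sudo journalctl -u kubelet -n 400`,
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
	} {
		out, err := runViaBash(cmd)
		fmt.Printf("$ %s\nerr=%v\n%s\n", cmd, err, out)
	}
}
```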
	I1212 00:24:57.624991   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:57.635207   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:57.635270   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:57.662282   54101 cri.go:89] found id: ""
	I1212 00:24:57.662296   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.662304   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:57.662309   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:57.662365   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:57.692048   54101 cri.go:89] found id: ""
	I1212 00:24:57.692061   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.692068   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:57.692073   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:57.692128   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:57.717665   54101 cri.go:89] found id: ""
	I1212 00:24:57.717679   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.717686   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:57.717692   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:57.717752   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:57.746206   54101 cri.go:89] found id: ""
	I1212 00:24:57.746219   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.746226   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:57.746233   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:57.746291   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:57.772883   54101 cri.go:89] found id: ""
	I1212 00:24:57.772896   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.772904   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:57.772909   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:57.772969   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:57.796550   54101 cri.go:89] found id: ""
	I1212 00:24:57.796564   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.796571   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:57.796576   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:57.796636   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:57.819457   54101 cri.go:89] found id: ""
	I1212 00:24:57.819470   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.819481   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:57.819489   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:57.819499   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:57.848789   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:57.848804   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:57.903379   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:57.903404   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:57.914134   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:57.914150   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:57.981734   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:57.973800   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:57.974813   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:57.975633   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:57.976681   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:57.977401   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:57.981743   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:57.981764   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
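(Annotation: the roughly three-second cadence of the `sudo pgrep -xnf kube-apiserver.*minikube.*` probes — `-x` exact match, `-n` newest process, `-f` match against the full command line — is a poll-until-healthy loop: if no apiserver process exists, gather diagnostics and try again until some deadline. A simplified sketch of that wait pattern; the interval, timeout, and helper names are illustrative, not minikube's actual values:)

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning checks for a live kube-apiserver process the way
// the log does, via pgrep -xnf against the full command line.
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil // pgrep exits 0 only when at least one process matched
}

// waitForAPIServer polls until the process appears or the deadline
// passes; each failed probe is where the log's diagnostics run.
func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			return nil
		}
		// gather logs here (kubelet, dmesg, describe nodes, containerd, ...)
		time.Sleep(interval)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServer(3*time.Second, time.Minute); err != nil {
		fmt.Println(err)
	}
}
```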
	I1212 00:25:00.548466   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:00.559868   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:00.559941   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:00.588355   54101 cri.go:89] found id: ""
	I1212 00:25:00.588369   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.588377   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:00.588383   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:00.588446   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:00.615059   54101 cri.go:89] found id: ""
	I1212 00:25:00.615073   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.615080   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:00.615085   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:00.615144   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:00.642285   54101 cri.go:89] found id: ""
	I1212 00:25:00.642299   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.642307   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:00.642312   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:00.642370   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:00.670680   54101 cri.go:89] found id: ""
	I1212 00:25:00.670693   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.670701   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:00.670706   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:00.670766   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:00.696244   54101 cri.go:89] found id: ""
	I1212 00:25:00.696258   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.696266   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:00.696271   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:00.696386   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:00.725727   54101 cri.go:89] found id: ""
	I1212 00:25:00.725741   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.725758   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:00.725764   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:00.725844   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:00.754004   54101 cri.go:89] found id: ""
	I1212 00:25:00.754018   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.754025   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:00.754032   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:00.754044   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:00.766092   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:00.766108   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:00.830876   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:00.822487   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.823145   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.824701   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.825291   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.826797   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:25:00.830886   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:00.830899   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:00.893247   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:00.893265   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:00.920729   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:00.920744   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:03.481388   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:03.491775   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:03.491838   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:03.521216   54101 cri.go:89] found id: ""
	I1212 00:25:03.521230   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.521238   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:03.521243   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:03.521304   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:03.549226   54101 cri.go:89] found id: ""
	I1212 00:25:03.549240   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.549247   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:03.549258   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:03.549315   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:03.577069   54101 cri.go:89] found id: ""
	I1212 00:25:03.577083   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.577090   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:03.577097   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:03.577156   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:03.606566   54101 cri.go:89] found id: ""
	I1212 00:25:03.606580   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.606587   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:03.606592   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:03.606652   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:03.631034   54101 cri.go:89] found id: ""
	I1212 00:25:03.631049   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.631057   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:03.631062   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:03.631125   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:03.655850   54101 cri.go:89] found id: ""
	I1212 00:25:03.655864   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.655871   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:03.655876   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:03.655951   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:03.682159   54101 cri.go:89] found id: ""
	I1212 00:25:03.682173   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.682180   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:03.682187   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:03.682200   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:03.692956   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:03.692973   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:03.759732   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:03.751026   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.751692   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.753437   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.754061   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.755694   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:25:03.759743   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:03.759754   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:03.821448   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:03.821467   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:03.854174   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:03.854191   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:06.412785   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:06.423128   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:06.423192   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:06.451062   54101 cri.go:89] found id: ""
	I1212 00:25:06.451075   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.451082   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:06.451087   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:06.451145   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:06.476861   54101 cri.go:89] found id: ""
	I1212 00:25:06.476875   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.476882   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:06.476888   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:06.476956   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:06.502250   54101 cri.go:89] found id: ""
	I1212 00:25:06.502277   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.502284   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:06.502295   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:06.502363   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:06.527789   54101 cri.go:89] found id: ""
	I1212 00:25:06.527803   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.527810   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:06.527816   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:06.527876   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:06.552928   54101 cri.go:89] found id: ""
	I1212 00:25:06.552942   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.552950   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:06.552956   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:06.553015   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:06.580455   54101 cri.go:89] found id: ""
	I1212 00:25:06.580468   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.580475   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:06.580481   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:06.580541   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:06.605618   54101 cri.go:89] found id: ""
	I1212 00:25:06.605632   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.605640   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:06.605656   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:06.605667   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:06.661856   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:06.661873   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:06.673040   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:06.673057   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:06.744531   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:06.737026   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.737431   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.738919   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.739260   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.740703   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:25:06.744541   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:06.744552   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:06.810963   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:06.810982   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:09.340882   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:09.351148   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:09.351207   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:09.376060   54101 cri.go:89] found id: ""
	I1212 00:25:09.376074   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.376081   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:09.376086   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:09.376144   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:09.401509   54101 cri.go:89] found id: ""
	I1212 00:25:09.401524   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.401532   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:09.401537   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:09.401594   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:09.430682   54101 cri.go:89] found id: ""
	I1212 00:25:09.430697   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.430704   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:09.430709   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:09.430779   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:09.455570   54101 cri.go:89] found id: ""
	I1212 00:25:09.455583   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.455590   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:09.455596   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:09.455652   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:09.480221   54101 cri.go:89] found id: ""
	I1212 00:25:09.480234   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.480251   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:09.480257   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:09.480312   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:09.504553   54101 cri.go:89] found id: ""
	I1212 00:25:09.504566   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.504573   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:09.504578   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:09.504634   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:09.529091   54101 cri.go:89] found id: ""
	I1212 00:25:09.529105   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.529111   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:09.529119   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:09.529129   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:09.590147   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:09.590169   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:09.616705   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:09.616720   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:09.674296   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:09.674314   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:09.685008   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:09.685023   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:09.747995   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:09.740039   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.740945   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.742442   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.742752   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.744216   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:09.740039   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.740945   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.742442   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.742752   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.744216   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
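
The block above is one full iteration of minikube's wait for a healthy kube-apiserver: process 54101 checks for an apiserver process with pgrep, lists CRI containers for each control-plane component with crictl, finds none, gathers kubelet/containerd/dmesg logs, and retries "kubectl describe nodes", which fails because nothing is listening on localhost:8441. The cycles that follow repeat this every 2-3 seconds until the test's timeout. A minimal sketch of such a poll-until-ready loop is below; the function names, the 3-second interval, and the one-minute deadline are illustrative assumptions, not minikube's actual implementation.

    // poll_apiserver.go - a minimal sketch of a poll-until-ready loop like
    // the one visible in the log above. The probe address, interval, and
    // deadline are illustrative assumptions, not values from minikube.
    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    // apiserverUp reports whether something is accepting TCP connections on
    // addr - the condition the "connection refused" errors above show to be
    // false on this node.
    func apiserverUp(addr string) bool {
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		return false
    	}
    	conn.Close()
    	return true
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
    	defer cancel()

    	ticker := time.NewTicker(3 * time.Second) // log shows ~2.5-3s between probes
    	defer ticker.Stop()

    	for {
    		select {
    		case <-ctx.Done():
    			fmt.Println("gave up waiting for apiserver") // minikube instead fails the test
    			return
    		case <-ticker.C:
    			if apiserverUp("localhost:8441") {
    				fmt.Println("apiserver is answering")
    				return
    			}
    			fmt.Println("apiserver not up yet; would gather logs and retry")
    		}
    	}
    }

The probe checks exactly the condition that the stderr above reports as failing: a TCP connect to localhost:8441 succeeding.
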
	I1212 00:25:12.248240   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:12.258577   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:12.258636   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:12.296410   54101 cri.go:89] found id: ""
	I1212 00:25:12.296425   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.296432   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:12.296438   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:12.296495   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:12.322054   54101 cri.go:89] found id: ""
	I1212 00:25:12.322069   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.322076   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:12.322081   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:12.322137   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:12.354557   54101 cri.go:89] found id: ""
	I1212 00:25:12.354570   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.354577   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:12.354582   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:12.354643   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:12.379214   54101 cri.go:89] found id: ""
	I1212 00:25:12.379228   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.379235   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:12.379240   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:12.379297   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:12.403239   54101 cri.go:89] found id: ""
	I1212 00:25:12.403253   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.403261   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:12.403266   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:12.403325   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:12.429024   54101 cri.go:89] found id: ""
	I1212 00:25:12.429039   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.429052   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:12.429058   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:12.429117   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:12.454240   54101 cri.go:89] found id: ""
	I1212 00:25:12.454253   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.454260   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:12.454268   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:12.454279   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:12.465168   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:12.465185   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:12.530196   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:12.522373   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.522762   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.524330   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.524677   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.526171   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:12.522373   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.522762   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.524330   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.524677   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.526171   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:12.530207   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:12.530218   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:12.596659   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:12.596686   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:12.629646   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:12.629666   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
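
Each retry begins with the same seven crictl probes seen above, one per control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet); every probe returns an empty ID list, which cri.go logs as found id: "" and logs.go as 0 containers: []. A sketch of that probe, under the assumption that crictl is on the PATH and passwordless sudo is available, could look like this:

    // probe_containers.go - a sketch of the per-component container probe
    // from the log above. Requires crictl and root on the node, so treat it
    // as an illustration of the pattern rather than a drop-in tool.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers asks crictl for the IDs of all containers, running or
    // exited, whose name matches the given component.
    func listContainers(component string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a",
    		"--quiet", "--name="+component).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
    	for _, c := range components {
    		ids, err := listContainers(c)
    		if err != nil {
    			fmt.Printf("%s: probe failed: %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }

On this node all seven lists come back empty, which is why the subsequent describe-nodes call can only fail.
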
	I1212 00:25:15.188117   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:15.198184   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:15.198246   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:15.222760   54101 cri.go:89] found id: ""
	I1212 00:25:15.222774   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.222781   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:15.222786   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:15.222841   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:15.247134   54101 cri.go:89] found id: ""
	I1212 00:25:15.247149   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.247156   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:15.247161   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:15.247220   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:15.273493   54101 cri.go:89] found id: ""
	I1212 00:25:15.273506   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.273513   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:15.273518   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:15.273575   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:15.325769   54101 cri.go:89] found id: ""
	I1212 00:25:15.325782   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.325790   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:15.325794   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:15.325851   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:15.352564   54101 cri.go:89] found id: ""
	I1212 00:25:15.352578   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.352589   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:15.352594   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:15.352652   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:15.381006   54101 cri.go:89] found id: ""
	I1212 00:25:15.381025   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.381032   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:15.381037   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:15.381094   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:15.404889   54101 cri.go:89] found id: ""
	I1212 00:25:15.404903   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.404910   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:15.404917   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:15.404936   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:15.472619   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:15.464098   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.465350   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.466018   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.467674   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.468107   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:15.464098   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.465350   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.466018   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.467674   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.468107   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:15.472631   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:15.472643   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:15.533279   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:15.533297   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:15.563170   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:15.563185   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:15.622483   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:15.622499   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
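
When no containers are found, the loop falls back to collecting diagnostics: the last 400 journal lines for kubelet and containerd, a filtered dmesg tail, and a container-status listing that degrades from crictl to docker (the sudo `which crictl || echo crictl` ps -a || sudo docker ps -a line above). A sketch of that gathering pass, reusing the exact shell commands from the log but with hypothetical Go wrapper names, might be:

    // gather_logs.go - a sketch of one "Gathering logs for ..." pass from
    // the log above. The shell commands are copied verbatim from the log;
    // the wrapper is illustrative and needs root on the node to run.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs one diagnostic command through bash, as ssh_runner does
    // over SSH in the log, and prints whatever it produced.
    func gather(name, script string) {
    	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
    	if err != nil {
    		fmt.Printf("%s: %v\n", name, err)
    	}
    	fmt.Printf("== %s ==\n%s\n", name, out)
    }

    func main() {
    	gather("kubelet", "sudo journalctl -u kubelet -n 400")
    	gather("containerd", "sudo journalctl -u containerd -n 400")
    	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }

Capping each source at 400 lines keeps every retry's diagnostic pass cheap even though, as here, it runs dozens of times before the test gives up.
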
	I1212 00:25:18.135301   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:18.145599   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:18.145657   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:18.170223   54101 cri.go:89] found id: ""
	I1212 00:25:18.170237   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.170245   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:18.170250   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:18.170317   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:18.194981   54101 cri.go:89] found id: ""
	I1212 00:25:18.195034   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.195042   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:18.195047   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:18.195107   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:18.219741   54101 cri.go:89] found id: ""
	I1212 00:25:18.219754   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.219762   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:18.219767   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:18.219836   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:18.244023   54101 cri.go:89] found id: ""
	I1212 00:25:18.244036   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.244043   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:18.244048   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:18.244105   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:18.268830   54101 cri.go:89] found id: ""
	I1212 00:25:18.268844   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.268852   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:18.268857   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:18.268920   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:18.308533   54101 cri.go:89] found id: ""
	I1212 00:25:18.308547   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.308553   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:18.308558   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:18.308618   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:18.342407   54101 cri.go:89] found id: ""
	I1212 00:25:18.342420   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.342426   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:18.342434   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:18.342444   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:18.411629   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:18.403777   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.404392   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.405943   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.406371   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.407842   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:18.403777   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.404392   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.405943   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.406371   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.407842   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:18.411640   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:18.411652   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:18.476356   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:18.476375   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:18.508597   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:18.508613   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:18.565071   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:18.565088   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:21.075765   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:21.087124   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:21.087190   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:21.116451   54101 cri.go:89] found id: ""
	I1212 00:25:21.116465   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.116472   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:21.116477   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:21.116540   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:21.142594   54101 cri.go:89] found id: ""
	I1212 00:25:21.142607   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.142615   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:21.142620   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:21.142678   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:21.167624   54101 cri.go:89] found id: ""
	I1212 00:25:21.167638   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.167646   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:21.167651   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:21.167709   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:21.195907   54101 cri.go:89] found id: ""
	I1212 00:25:21.195921   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.195927   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:21.195932   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:21.195987   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:21.220794   54101 cri.go:89] found id: ""
	I1212 00:25:21.220808   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.220816   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:21.220821   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:21.220880   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:21.246438   54101 cri.go:89] found id: ""
	I1212 00:25:21.246451   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.246462   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:21.246473   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:21.246531   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:21.271784   54101 cri.go:89] found id: ""
	I1212 00:25:21.271799   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.271806   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:21.271814   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:21.271833   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:21.315787   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:21.315812   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:21.377319   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:21.377338   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:21.388870   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:21.388885   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:21.453883   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:21.444432   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.445344   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.446969   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.447534   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.449241   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:21.444432   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.445344   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.446969   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.447534   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.449241   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:21.453893   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:21.453904   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:24.019730   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:24.030732   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:24.030792   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:24.057384   54101 cri.go:89] found id: ""
	I1212 00:25:24.057397   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.057404   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:24.057410   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:24.057467   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:24.087868   54101 cri.go:89] found id: ""
	I1212 00:25:24.087883   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.087891   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:24.087896   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:24.087960   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:24.112813   54101 cri.go:89] found id: ""
	I1212 00:25:24.112827   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.112835   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:24.112840   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:24.112900   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:24.141527   54101 cri.go:89] found id: ""
	I1212 00:25:24.141541   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.141548   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:24.141553   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:24.141612   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:24.171422   54101 cri.go:89] found id: ""
	I1212 00:25:24.171436   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.171444   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:24.171449   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:24.171506   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:24.196733   54101 cri.go:89] found id: ""
	I1212 00:25:24.196758   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.196767   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:24.196772   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:24.196840   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:24.221142   54101 cri.go:89] found id: ""
	I1212 00:25:24.221163   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.221170   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:24.221178   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:24.221188   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:24.280043   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:24.280061   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:24.294333   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:24.294347   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:24.376651   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:24.368398   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.368936   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.370665   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.371218   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.372749   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:24.368398   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.368936   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.370665   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.371218   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.372749   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:24.376660   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:24.376670   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:24.442437   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:24.442455   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:26.972180   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:26.982717   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:26.982778   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:27.016302   54101 cri.go:89] found id: ""
	I1212 00:25:27.016317   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.016324   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:27.016329   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:27.016390   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:27.041562   54101 cri.go:89] found id: ""
	I1212 00:25:27.041576   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.041583   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:27.041588   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:27.041647   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:27.067288   54101 cri.go:89] found id: ""
	I1212 00:25:27.067301   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.067308   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:27.067313   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:27.067370   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:27.093958   54101 cri.go:89] found id: ""
	I1212 00:25:27.093978   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.093985   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:27.093990   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:27.094046   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:27.119290   54101 cri.go:89] found id: ""
	I1212 00:25:27.119303   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.119310   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:27.119321   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:27.119378   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:27.147433   54101 cri.go:89] found id: ""
	I1212 00:25:27.147446   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.147452   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:27.147457   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:27.147513   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:27.172138   54101 cri.go:89] found id: ""
	I1212 00:25:27.172152   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.172159   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:27.172167   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:27.172177   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:27.228777   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:27.228797   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:27.240006   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:27.240021   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:27.317423   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:27.308478   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.309592   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.311317   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.311656   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.313135   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:27.308478   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.309592   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.311317   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.311656   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.313135   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:27.317433   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:27.317444   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:27.386770   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:27.386790   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:29.918004   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:29.928163   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:29.928225   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:29.957041   54101 cri.go:89] found id: ""
	I1212 00:25:29.957055   54101 logs.go:282] 0 containers: []
	W1212 00:25:29.957062   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:29.957067   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:29.957124   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:29.982223   54101 cri.go:89] found id: ""
	I1212 00:25:29.982237   54101 logs.go:282] 0 containers: []
	W1212 00:25:29.982244   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:29.982249   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:29.982306   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:30.021601   54101 cri.go:89] found id: ""
	I1212 00:25:30.021616   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.021625   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:30.021630   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:30.021707   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:30.065430   54101 cri.go:89] found id: ""
	I1212 00:25:30.065447   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.065456   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:30.065462   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:30.065547   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:30.094609   54101 cri.go:89] found id: ""
	I1212 00:25:30.094623   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.094630   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:30.094635   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:30.094695   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:30.122604   54101 cri.go:89] found id: ""
	I1212 00:25:30.122618   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.122626   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:30.122631   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:30.122690   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:30.148645   54101 cri.go:89] found id: ""
	I1212 00:25:30.148659   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.148667   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:30.148675   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:30.148685   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:30.206432   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:30.206452   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:30.218454   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:30.218469   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:30.284319   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:30.274262   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.275194   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.276848   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.277482   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.278689   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:30.274262   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.275194   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.276848   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.277482   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.278689   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:30.284328   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:30.284339   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:30.356346   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:30.356372   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:32.883437   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:32.893868   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:32.893927   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:32.917839   54101 cri.go:89] found id: ""
	I1212 00:25:32.917852   54101 logs.go:282] 0 containers: []
	W1212 00:25:32.917859   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:32.917865   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:32.917931   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:32.942885   54101 cri.go:89] found id: ""
	I1212 00:25:32.942899   54101 logs.go:282] 0 containers: []
	W1212 00:25:32.942906   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:32.942911   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:32.942974   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:32.968519   54101 cri.go:89] found id: ""
	I1212 00:25:32.968532   54101 logs.go:282] 0 containers: []
	W1212 00:25:32.968539   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:32.968544   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:32.968602   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:33.004343   54101 cri.go:89] found id: ""
	I1212 00:25:33.004357   54101 logs.go:282] 0 containers: []
	W1212 00:25:33.004365   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:33.004370   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:33.004440   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:33.033496   54101 cri.go:89] found id: ""
	I1212 00:25:33.033510   54101 logs.go:282] 0 containers: []
	W1212 00:25:33.033524   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:33.033530   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:33.033590   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:33.061868   54101 cri.go:89] found id: ""
	I1212 00:25:33.061890   54101 logs.go:282] 0 containers: []
	W1212 00:25:33.061898   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:33.061903   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:33.061969   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:33.088616   54101 cri.go:89] found id: ""
	I1212 00:25:33.088630   54101 logs.go:282] 0 containers: []
	W1212 00:25:33.088637   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:33.088645   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:33.088655   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:33.144882   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:33.144899   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:33.156391   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:33.156407   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:33.220404   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:33.211436   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.212144   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.214079   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.214925   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.216614   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:33.211436   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.212144   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.214079   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.214925   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.216614   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
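Every kubectl call in this stretch fails the same way: nothing is listening on localhost:8441, so the client is refused before it can even fetch the API group list. A quick hand check, reusing the curl style the kubelet-check applies later in this log; the profile name is taken from this run, and the /livez path is a standard apiserver endpoint rather than something this log exercises:

    # Probe the apiserver port directly from inside the node; -k because the
    # serving cert is not trusted here, -s to keep the output terse.
    minikube -p functional-767012 ssh -- \
      'curl -sk https://localhost:8441/livez || echo "apiserver not reachable on 8441"'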
	I1212 00:25:33.220413   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:33.220424   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:33.291732   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:33.291751   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
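The "container status" gather above uses a shell fallback: it prefers whatever crictl resolves on the PATH and drops back to docker when crictl is missing. The same probe can be run by hand; the command is copied from the logged gather step, only rewrapped:

    # Lists all containers (running or exited); falls back to docker if crictl
    # is not installed, exactly as the gatherer does.
    minikube -p functional-767012 ssh -- \
      'sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a'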
	I1212 00:25:35.829535   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:35.839412   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:35.839479   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:35.864609   54101 cri.go:89] found id: ""
	I1212 00:25:35.864629   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.864639   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:35.864644   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:35.864705   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:35.888220   54101 cri.go:89] found id: ""
	I1212 00:25:35.888234   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.888241   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:35.888245   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:35.888304   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:35.911726   54101 cri.go:89] found id: ""
	I1212 00:25:35.911739   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.911746   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:35.911751   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:35.911812   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:35.937495   54101 cri.go:89] found id: ""
	I1212 00:25:35.937510   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.937517   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:35.937522   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:35.937578   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:35.962276   54101 cri.go:89] found id: ""
	I1212 00:25:35.962290   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.962296   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:35.962301   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:35.962360   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:35.985962   54101 cri.go:89] found id: ""
	I1212 00:25:35.985981   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.985989   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:35.985994   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:35.986056   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:36.012853   54101 cri.go:89] found id: ""
	I1212 00:25:36.012867   54101 logs.go:282] 0 containers: []
	W1212 00:25:36.012875   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:36.012882   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:36.012895   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:36.069296   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:36.069315   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:36.080983   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:36.081000   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:36.149041   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:36.139864   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.140552   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.142366   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.143049   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.144891   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:36.139864   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.140552   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.142366   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.143049   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.144891   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:36.149053   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:36.149064   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:36.210509   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:36.210528   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:38.743061   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:38.752984   54101 kubeadm.go:602] duration metric: took 4m3.726857079s to restartPrimaryControlPlane
	W1212 00:25:38.753047   54101 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 00:25:38.753120   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
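The forced reset wipes /etc/kubernetes, which explains every status-2 exit below: the kubeconfigs and manifests no longer exist, so the stale-config check has nothing left to clean. The check itself is reproducible verbatim inside the node:

    # Matches the logged config check; exit status 2 just means the files are gone.
    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
      /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf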
	I1212 00:25:39.158817   54101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:25:39.172695   54101 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:25:39.181725   54101 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:25:39.181785   54101 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:25:39.189823   54101 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:25:39.189833   54101 kubeadm.go:158] found existing configuration files:
	
	I1212 00:25:39.189882   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:25:39.197507   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:25:39.197568   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:25:39.206290   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:25:39.215918   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:25:39.215979   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:25:39.224009   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:25:39.231677   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:25:39.231744   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:25:39.239027   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:25:39.246759   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:25:39.246820   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
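The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. A condensed sketch of the same loop, with the endpoint taken from this run:

    # Condensed form of the per-file cleanup shown above.
    ENDPOINT='https://control-plane.minikube.internal:8441'
    for name in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${name}.conf"
      # grep exits non-zero when the endpoint (or the file itself) is missing;
      # either way the potentially stale file is removed.
      sudo grep -q "$ENDPOINT" "$conf" || sudo rm -f "$conf"
    done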
	I1212 00:25:39.254322   54101 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:25:39.294892   54101 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:25:39.294976   54101 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:25:39.369123   54101 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:25:39.369186   54101 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:25:39.369220   54101 kubeadm.go:319] OS: Linux
	I1212 00:25:39.369264   54101 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:25:39.369311   54101 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:25:39.369356   54101 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:25:39.369403   54101 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:25:39.369450   54101 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:25:39.369496   54101 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:25:39.369541   54101 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:25:39.369587   54101 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:25:39.369632   54101 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:25:39.438649   54101 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:25:39.438759   54101 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:25:39.438849   54101 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:25:39.447406   54101 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:25:39.452683   54101 out.go:252]   - Generating certificates and keys ...
	I1212 00:25:39.452767   54101 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:25:39.452831   54101 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:25:39.452906   54101 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:25:39.452965   54101 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:25:39.453033   54101 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:25:39.453085   54101 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 00:25:39.453148   54101 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:25:39.453208   54101 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:25:39.453281   54101 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:25:39.453353   54101 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:25:39.453389   54101 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 00:25:39.453445   54101 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:25:39.710711   54101 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:25:40.209307   54101 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:25:40.334299   54101 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:25:40.657582   54101 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:25:40.893171   54101 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:25:40.893926   54101 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:25:40.896489   54101 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:25:40.899767   54101 out.go:252]   - Booting up control plane ...
	I1212 00:25:40.899871   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:25:40.899953   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:25:40.900236   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:25:40.921621   54101 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:25:40.921722   54101 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:25:40.928629   54101 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:25:40.928898   54101 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:25:40.928939   54101 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:25:41.061713   54101 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:25:41.061825   54101 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:29:41.062316   54101 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001026811s
	I1212 00:29:41.062606   54101 kubeadm.go:319] 
	I1212 00:29:41.062683   54101 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:29:41.062716   54101 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:29:41.062821   54101 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:29:41.062826   54101 kubeadm.go:319] 
	I1212 00:29:41.062929   54101 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:29:41.062960   54101 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:29:41.063008   54101 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:29:41.063012   54101 kubeadm.go:319] 
	I1212 00:29:41.067208   54101 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:29:41.067622   54101 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:29:41.067731   54101 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:29:41.067994   54101 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 00:29:41.067998   54101 kubeadm.go:319] 
	I1212 00:29:41.068065   54101 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
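The init command for this attempt ignored a long list of preflight checks (ports, swap, CPU, memory, SystemVerification) because the docker driver reuses an existing environment. When an attempt dies like this, the preflight phase can be re-run on its own to separate environment problems from the kubelet failure; a sketch using the binary path and config file from the logged command, with a single illustrative ignore entry:

    # Re-run only kubeadm's preflight phase against the same config.
    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification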
	W1212 00:29:41.068164   54101 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001026811s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
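Both init attempts stall at the same point: kubeadm polls http://127.0.0.1:10248/healthz for four minutes and never gets an answer, so the static control-plane pods are never confirmed. kubeadm's own suggestions translate directly into commands to run inside the node; the pager flags and the tail are added here for non-interactive use:

    # The exact probe kubeadm describes, plus its two suggested follow-ups.
    curl -sSL http://127.0.0.1:10248/healthz; echo
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 50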
	
	I1212 00:29:41.068252   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 00:29:41.482759   54101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:29:41.496287   54101 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:29:41.496351   54101 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:29:41.504378   54101 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:29:41.504387   54101 kubeadm.go:158] found existing configuration files:
	
	I1212 00:29:41.504442   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:29:41.512585   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:29:41.512640   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:29:41.520530   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:29:41.528262   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:29:41.528318   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:29:41.536111   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:29:41.543998   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:29:41.544056   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:29:41.551686   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:29:41.559774   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:29:41.559831   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:29:41.567115   54101 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:29:41.604105   54101 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:29:41.604156   54101 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:29:41.681810   54101 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:29:41.681880   54101 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:29:41.681919   54101 kubeadm.go:319] OS: Linux
	I1212 00:29:41.681969   54101 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:29:41.682023   54101 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:29:41.682069   54101 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:29:41.682134   54101 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:29:41.682195   54101 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:29:41.682256   54101 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:29:41.682310   54101 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:29:41.682358   54101 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:29:41.682410   54101 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:29:41.751743   54101 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:29:41.751870   54101 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:29:41.751978   54101 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:29:41.757399   54101 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:29:41.762811   54101 out.go:252]   - Generating certificates and keys ...
	I1212 00:29:41.762902   54101 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:29:41.762969   54101 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:29:41.763059   54101 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:29:41.763119   54101 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:29:41.763187   54101 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:29:41.763239   54101 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 00:29:41.763301   54101 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:29:41.763361   54101 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:29:41.763434   54101 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:29:41.763505   54101 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:29:41.763542   54101 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 00:29:41.763596   54101 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:29:42.025181   54101 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:29:42.229266   54101 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:29:42.409579   54101 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:29:42.479383   54101 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:29:43.146782   54101 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:29:43.147428   54101 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:29:43.150122   54101 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:29:43.153470   54101 out.go:252]   - Booting up control plane ...
	I1212 00:29:43.153571   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:29:43.153647   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:29:43.153712   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:29:43.174954   54101 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:29:43.175084   54101 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:29:43.182722   54101 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:29:43.183334   54101 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:29:43.183511   54101 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:29:43.327482   54101 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:29:43.327594   54101 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:33:43.326577   54101 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001134626s
	I1212 00:33:43.326601   54101 kubeadm.go:319] 
	I1212 00:33:43.326657   54101 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:33:43.326688   54101 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:33:43.326791   54101 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:33:43.326796   54101 kubeadm.go:319] 
	I1212 00:33:43.326899   54101 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:33:43.326930   54101 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:33:43.326959   54101 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:33:43.326962   54101 kubeadm.go:319] 
	I1212 00:33:43.331146   54101 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:33:43.331567   54101 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:33:43.331673   54101 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:33:43.331909   54101 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 00:33:43.331913   54101 kubeadm.go:319] 
	I1212 00:33:43.331980   54101 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 00:33:43.332070   54101 kubeadm.go:403] duration metric: took 12m8.353678295s to StartCluster
	I1212 00:33:43.332098   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:33:43.332159   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:33:43.356905   54101 cri.go:89] found id: ""
	I1212 00:33:43.356919   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.356925   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:33:43.356930   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:33:43.356985   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:33:43.381448   54101 cri.go:89] found id: ""
	I1212 00:33:43.381464   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.381471   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:33:43.381477   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:33:43.381541   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:33:43.409467   54101 cri.go:89] found id: ""
	I1212 00:33:43.409480   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.409487   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:33:43.409492   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:33:43.409550   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:33:43.434352   54101 cri.go:89] found id: ""
	I1212 00:33:43.434367   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.434375   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:33:43.434381   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:33:43.434439   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:33:43.458566   54101 cri.go:89] found id: ""
	I1212 00:33:43.458581   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.458588   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:33:43.458593   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:33:43.458661   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:33:43.482646   54101 cri.go:89] found id: ""
	I1212 00:33:43.482660   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.482667   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:33:43.482672   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:33:43.482728   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:33:43.507433   54101 cri.go:89] found id: ""
	I1212 00:33:43.507445   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.507452   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:33:43.507461   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:33:43.507472   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:33:43.575281   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:33:43.567196   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.568177   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.569762   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.570292   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.571460   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:33:43.567196   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.568177   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.569762   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.570292   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.571460   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:33:43.575296   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:33:43.575305   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:33:43.637567   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:33:43.637585   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:33:43.665505   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:33:43.665520   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:33:43.723897   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:33:43.723913   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
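This final gather pass repeats the describe-nodes, containerd, container-status, kubelet and dmesg collection before minikube gives up. The dmesg filter keeps only warning-and-worse lines, which is where cgroup or OOM trouble would surface; the logged command works as-is inside the node:

    # Verbatim from the gather step: human-readable, no pager or color, tail kept.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400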
	W1212 00:33:43.734646   54101 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001134626s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 00:33:43.734686   54101 out.go:285] * 
	W1212 00:33:43.734800   54101 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001134626s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 00:33:43.734860   54101 out.go:285] * 
	W1212 00:33:43.737311   54101 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:33:43.743292   54101 out.go:203] 
	W1212 00:33:43.746156   54101 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001134626s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 00:33:43.746395   54101 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 00:33:43.746473   54101 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 00:33:43.751052   54101 out.go:203] 
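	
	The wait-control-plane failure above is a downstream symptom: kubeadm polls http://127.0.0.1:10248/healthz for up to 4m0s, but the kubelet (see the kubelet section below) exits on a cgroup v1 validation error before the health endpoint ever comes up. A minimal host-side check of the cgroup version, assuming a standard Linux node: "cgroup2fs" indicates cgroup v2, "tmpfs" indicates cgroup v1.
	
	  stat -fc %T /sys/fs/cgroup/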
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272455867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272520269Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272625542Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272714248Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272776665Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272836596Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272893384Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272958435Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.273027211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.273124763Z" level=info msg="Connect containerd service"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.273469529Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.274122072Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287381427Z" level=info msg="Start subscribing containerd event"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287554622Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287703153Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287625047Z" level=info msg="Start recovering state"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327211013Z" level=info msg="Start event monitor"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327399462Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327470929Z" level=info msg="Start streaming server"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327536341Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327597642Z" level=info msg="runtime interface starting up..."
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327652682Z" level=info msg="starting plugins..."
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327716215Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327919745Z" level=info msg="containerd successfully booted in 0.080422s"
	Dec 12 00:21:33 functional-767012 systemd[1]: Started containerd.service - containerd container runtime.
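	
	Containerd itself comes up cleanly ("containerd successfully booted in 0.080422s"); the "no network config found in /etc/cni/net.d" error is expected before a CNI plugin (kindnet, per the CNI manager log further down) has written its config. A quick confirmation sketch, assuming the kic container is still running:
	
	  docker exec functional-767012 ls /etc/cni/net.d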
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:33:44.948958   21009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:44.949396   21009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:44.950955   21009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:44.951507   21009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:44.953007   21009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
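	
	These refusals follow directly from the kubelet failure: with no kubelet running, the static pod manifests written to /etc/kubernetes/manifests are never acted on, so nothing listens on 8441. The same symptom is probeable from the host on the published port (32791, per the docker inspect output later in this report); a hedged check:
	
	  curl -sk https://127.0.0.1:32791/healthz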
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 00:33:44 up  1:16,  0 user,  load average: 0.02, 0.14, 0.33
	Linux functional-767012 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:33:41 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:33:42 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 12 00:33:42 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:33:42 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:33:42 functional-767012 kubelet[20815]: E1212 00:33:42.576726   20815 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:33:42 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:33:42 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:33:43 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 12 00:33:43 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:33:43 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:33:43 functional-767012 kubelet[20821]: E1212 00:33:43.331930   20821 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:33:43 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:33:43 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:33:44 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 12 00:33:44 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:33:44 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:33:44 functional-767012 kubelet[20918]: E1212 00:33:44.087560   20918 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:33:44 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:33:44 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:33:44 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 12 00:33:44 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:33:44 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:33:44 functional-767012 kubelet[20983]: E1212 00:33:44.839218   20983 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:33:44 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:33:44 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
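	
	The restart counter (319 to 322 in roughly three seconds) shows a tight crash loop: this kubelet refuses to validate its configuration on a cgroup v1 host unless explicitly opted back in via the 'FailCgroupV1' option named in the preflight warning. A minimal sketch of the opt-in, assuming the camelCase YAML field name and the config path kubeadm wrote above; whether this alone clears the validation is unverified:
	
	  # Appends the opt-in to the kubelet config kubeadm wrote, then restarts.
	  echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	  sudo systemctl restart kubelet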
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012: exit status 2 (370.996778ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-767012" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.67s)
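
The ExtraConfig restart fails from the same kubelet cgroup v1 crash loop documented above. A reproduction sketch using exactly the mitigation the error banner suggests (its effectiveness on this cgroup v1 host is unverified):

    out/minikube-linux-arm64 start -p functional-767012 --extra-config=kubelet.cgroup-driver=systemd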

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-767012 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-767012 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (59.593626ms)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-767012 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-767012
helpers_test.go:244: (dbg) docker inspect functional-767012:

-- stdout --
	[
	    {
	        "Id": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	        "Created": "2025-12-12T00:06:52.261765556Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42951,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:06:52.317917194Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hostname",
	        "HostsPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hosts",
	        "LogPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e-json.log",
	        "Name": "/functional-767012",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-767012:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-767012",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	                "LowerDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-767012",
	                "Source": "/var/lib/docker/volumes/functional-767012/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-767012",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-767012",
	                "name.minikube.sigs.k8s.io": "functional-767012",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e781257da3adf1d3284ab2a6de0168c3db7957f25a7e53d0015250294193762d",
	            "SandboxKey": "/var/run/docker/netns/e781257da3ad",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-767012": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:4d:78:ba:7d:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "83467cc4cb13818b98ec0d7cb5fc0064ea6eb2c8db4256a8a81330921aa2d9a4",
	                    "EndpointID": "b787b732d8d748776ceeb6e65fab51cc1e79758446bc85ac20043b35593fab12",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-767012",
	                        "6585a82fe5e6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
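
The inspect output above confirms the apiserver port mapping: container port 8441 is published on 127.0.0.1:32791. The same Go template style the harness uses for 22/tcp elsewhere in this log can extract it directly; a small sketch:

    docker inspect functional-767012 --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'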
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012: exit status 2 (324.072958ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-095481 image ls --format short --alsologtostderr                                                                                             │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image   │ functional-095481 image ls --format table --alsologtostderr                                                                                             │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image   │ functional-095481 image ls --format json --alsologtostderr                                                                                              │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ ssh     │ functional-095481 ssh pgrep buildkitd                                                                                                                   │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │                     │
	│ image   │ functional-095481 image build -t localhost/my-image:functional-095481 testdata/build --alsologtostderr                                                  │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ image   │ functional-095481 image ls                                                                                                                              │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ delete  │ -p functional-095481                                                                                                                                    │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
	│ start   │ -p functional-767012 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │                     │
	│ start   │ -p functional-767012 --alsologtostderr -v=8                                                                                                             │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:15 UTC │                     │
	│ cache   │ functional-767012 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ functional-767012 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ functional-767012 cache add registry.k8s.io/pause:latest                                                                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ functional-767012 cache add minikube-local-cache-test:functional-767012                                                                                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ functional-767012 cache delete minikube-local-cache-test:functional-767012                                                                              │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-767012 ssh sudo crictl images                                                                                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-767012 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-767012 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ cache   │ functional-767012 cache reload                                                                                                                          │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-767012 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ kubectl │ functional-767012 kubectl -- --context functional-767012 get pods                                                                                       │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ start   │ -p functional-767012 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
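	
	The audit trail shows three start attempts against functional-767012 with no recorded end time, i.e. none reached a ready cluster. Per the failure banner's own suggestion, the full log bundle for an issue report can be captured with:
	
	  out/minikube-linux-arm64 -p functional-767012 logs --file=logs.txt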
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:21:30
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:21:30.554245   54101 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:21:30.554345   54101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:21:30.554348   54101 out.go:374] Setting ErrFile to fd 2...
	I1212 00:21:30.554353   54101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:21:30.554677   54101 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:21:30.555164   54101 out.go:368] Setting JSON to false
	I1212 00:21:30.555965   54101 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3837,"bootTime":1765495054,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 00:21:30.556051   54101 start.go:143] virtualization:  
	I1212 00:21:30.559689   54101 out.go:179] * [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:21:30.562867   54101 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:21:30.562960   54101 notify.go:221] Checking for updates...
	I1212 00:21:30.566618   54101 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:21:30.569772   54101 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:21:30.572750   54101 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 00:21:30.576169   54101 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:21:30.579060   54101 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:21:30.582404   54101 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:21:30.582492   54101 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:21:30.621591   54101 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:21:30.621756   54101 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:21:30.683145   54101 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 00:21:30.674181767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:21:30.683240   54101 docker.go:319] overlay module found
	I1212 00:21:30.688118   54101 out.go:179] * Using the docker driver based on existing profile
	I1212 00:21:30.690961   54101 start.go:309] selected driver: docker
	I1212 00:21:30.690971   54101 start.go:927] validating driver "docker" against &{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:21:30.691125   54101 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:21:30.691237   54101 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:21:30.747846   54101 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 00:21:30.73816398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:21:30.748230   54101 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:21:30.748252   54101 cni.go:84] Creating CNI manager for ""
	I1212 00:21:30.748298   54101 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:21:30.748340   54101 start.go:353] cluster config:
	{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:21:30.751463   54101 out.go:179] * Starting "functional-767012" primary control-plane node in "functional-767012" cluster
	I1212 00:21:30.754231   54101 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 00:21:30.757160   54101 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:21:30.760119   54101 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:21:30.760160   54101 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 00:21:30.760168   54101 cache.go:65] Caching tarball of preloaded images
	I1212 00:21:30.760193   54101 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:21:30.760258   54101 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 00:21:30.760267   54101 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 00:21:30.760383   54101 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/config.json ...
	I1212 00:21:30.778906   54101 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:21:30.778917   54101 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:21:30.778938   54101 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:21:30.778968   54101 start.go:360] acquireMachinesLock for functional-767012: {Name:mk41cf89e93a3830367886ebbef2bb8f6e99e3f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:21:30.779070   54101 start.go:364] duration metric: took 80.115µs to acquireMachinesLock for "functional-767012"
	I1212 00:21:30.779088   54101 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:21:30.779093   54101 fix.go:54] fixHost starting: 
	I1212 00:21:30.779346   54101 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:21:30.795901   54101 fix.go:112] recreateIfNeeded on functional-767012: state=Running err=<nil>
	W1212 00:21:30.795920   54101 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:21:30.799043   54101 out.go:252] * Updating the running docker "functional-767012" container ...
	I1212 00:21:30.799064   54101 machine.go:94] provisionDockerMachine start ...
	I1212 00:21:30.799139   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:30.816214   54101 main.go:143] libmachine: Using SSH client type: native
	I1212 00:21:30.816539   54101 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:21:30.816545   54101 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:21:30.966929   54101 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
	I1212 00:21:30.966943   54101 ubuntu.go:182] provisioning hostname "functional-767012"
	I1212 00:21:30.967026   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:30.983921   54101 main.go:143] libmachine: Using SSH client type: native
	I1212 00:21:30.984212   54101 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:21:30.984220   54101 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-767012 && echo "functional-767012" | sudo tee /etc/hostname
	I1212 00:21:31.148238   54101 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
	I1212 00:21:31.148339   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:31.167090   54101 main.go:143] libmachine: Using SSH client type: native
	I1212 00:21:31.167393   54101 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:21:31.167407   54101 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-767012' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-767012/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-767012' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:21:31.315620   54101 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:21:31.315644   54101 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 00:21:31.315665   54101 ubuntu.go:190] setting up certificates
	I1212 00:21:31.315680   54101 provision.go:84] configureAuth start
	I1212 00:21:31.315738   54101 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:21:31.348126   54101 provision.go:143] copyHostCerts
	I1212 00:21:31.348184   54101 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 00:21:31.348191   54101 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 00:21:31.348265   54101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 00:21:31.348353   54101 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 00:21:31.348357   54101 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 00:21:31.348380   54101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 00:21:31.348433   54101 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 00:21:31.348436   54101 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 00:21:31.348457   54101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 00:21:31.348500   54101 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.functional-767012 san=[127.0.0.1 192.168.49.2 functional-767012 localhost minikube]
	I1212 00:21:31.571131   54101 provision.go:177] copyRemoteCerts
	I1212 00:21:31.571185   54101 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:21:31.571226   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:31.588332   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:31.690410   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:21:31.707240   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:21:31.724075   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:21:31.740524   54101 provision.go:87] duration metric: took 424.823605ms to configureAuth
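
The server cert generated above is issued with the SANs listed in the log (127.0.0.1, 192.168.49.2, functional-767012, localhost, minikube). A quick way to confirm them, assuming $MINIKUBE_HOME points at the .minikube directory used in this run:

    # Print the Subject Alternative Names baked into the generated server cert
    openssl x509 -noout -text -in "$MINIKUBE_HOME/machines/server.pem" \
      | grep -A1 'Subject Alternative Name'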
	I1212 00:21:31.740541   54101 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:21:31.740761   54101 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:21:31.740771   54101 machine.go:97] duration metric: took 941.698571ms to provisionDockerMachine
	I1212 00:21:31.740778   54101 start.go:293] postStartSetup for "functional-767012" (driver="docker")
	I1212 00:21:31.740788   54101 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:21:31.740838   54101 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:21:31.740873   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:31.758388   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:31.866987   54101 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:21:31.870573   54101 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:21:31.870591   54101 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:21:31.870603   54101 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 00:21:31.870659   54101 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 00:21:31.870732   54101 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 00:21:31.870809   54101 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts -> hosts in /etc/test/nested/copy/4290
	I1212 00:21:31.870853   54101 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4290
	I1212 00:21:31.878221   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:21:31.898601   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts --> /etc/test/nested/copy/4290/hosts (40 bytes)
	I1212 00:21:31.917863   54101 start.go:296] duration metric: took 177.070825ms for postStartSetup
	I1212 00:21:31.917948   54101 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:21:31.917994   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:31.934865   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:32.037797   54101 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:21:32.044535   54101 fix.go:56] duration metric: took 1.265435742s for fixHost
	I1212 00:21:32.044551   54101 start.go:83] releasing machines lock for "functional-767012", held for 1.265473363s
	I1212 00:21:32.044634   54101 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:21:32.063486   54101 ssh_runner.go:195] Run: cat /version.json
	I1212 00:21:32.063525   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:32.063754   54101 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:21:32.063796   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:32.082463   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:32.110313   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:32.198490   54101 ssh_runner.go:195] Run: systemctl --version
	I1212 00:21:32.295700   54101 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:21:32.300162   54101 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:21:32.300220   54101 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:21:32.308110   54101 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:21:32.308123   54101 start.go:496] detecting cgroup driver to use...
	I1212 00:21:32.308152   54101 detect.go:187] detected "cgroupfs" cgroup driver on host os
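
The detection logged here can be approximated by hand with the usual signals (a sketch, not minikube's exact code path):

    # cgroup2fs means a unified cgroup v2 hierarchy; tmpfs means legacy v1
    stat -fc %T /sys/fs/cgroup/
    # A running Docker daemon reports its own cgroup driver directly
    docker info --format '{{.CgroupDriver}}'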
	I1212 00:21:32.308196   54101 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:21:32.324857   54101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:21:32.337980   54101 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:21:32.338034   54101 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:21:32.353838   54101 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:21:32.367832   54101 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:21:32.501329   54101 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:21:32.628856   54101 docker.go:234] disabling docker service ...
	I1212 00:21:32.628933   54101 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:21:32.643664   54101 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:21:32.657070   54101 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:21:32.773509   54101 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:21:32.920829   54101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
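
After the stop/disable/mask sequence above, the resulting unit states can be checked directly; `is-active --quiet` communicates only through its exit status, which is what the log line above relies on:

    # Exit 0 = still running, non-zero = stopped/masked
    sudo systemctl is-active --quiet docker && echo running || echo stopped
    systemctl is-enabled docker.socket cri-docker.socket 2>/dev/null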
	I1212 00:21:32.933710   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:21:32.947319   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 00:21:32.956944   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:21:32.966825   54101 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:21:32.966891   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:21:32.976378   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:21:32.985341   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:21:32.995459   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:21:33.011573   54101 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:21:33.020559   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 00:21:33.029747   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 00:21:33.038731   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 00:21:33.048050   54101 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:21:33.056172   54101 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:21:33.063953   54101 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:21:33.190754   54101 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 00:21:33.330744   54101 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 00:21:33.330802   54101 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 00:21:33.334307   54101 start.go:564] Will wait 60s for crictl version
	I1212 00:21:33.334373   54101 ssh_runner.go:195] Run: which crictl
	I1212 00:21:33.337855   54101 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:21:33.361388   54101 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
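
The same version probe can be reproduced by hand against the endpoint written into /etc/crictl.yaml above:

    # Query containerd through the CRI socket configured above
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
    containerd --version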
	I1212 00:21:33.361444   54101 ssh_runner.go:195] Run: containerd --version
	I1212 00:21:33.383087   54101 ssh_runner.go:195] Run: containerd --version
	I1212 00:21:33.409485   54101 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 00:21:33.412580   54101 cli_runner.go:164] Run: docker network inspect functional-767012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:21:33.429552   54101 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:21:33.436766   54101 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 00:21:33.439631   54101 kubeadm.go:884] updating cluster {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:21:33.439814   54101 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:21:33.439917   54101 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:21:33.465266   54101 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:21:33.465277   54101 containerd.go:534] Images already preloaded, skipping extraction
	I1212 00:21:33.465345   54101 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:21:33.495685   54101 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:21:33.495696   54101 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:21:33.495703   54101 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 00:21:33.495800   54101 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-767012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
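
To see where this ExecStart override actually lands on the node, the drop-in written further down (10-kubeadm.conf) can be inspected with systemd's own tooling:

    # Show the kubelet unit plus every drop-in, including 10-kubeadm.conf
    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf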
	I1212 00:21:33.495863   54101 ssh_runner.go:195] Run: sudo crictl info
	I1212 00:21:33.520655   54101 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 00:21:33.520679   54101 cni.go:84] Creating CNI manager for ""
	I1212 00:21:33.520688   54101 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:21:33.520701   54101 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:21:33.520721   54101 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-767012 NodeName:functional-767012 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:21:33.520840   54101 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-767012"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
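
A staged config like the one above can be sanity-checked before any phase runs; recent kubeadm releases ship a validator (the path below is where minikube stages the file in this run):

    # Validate the kubeadm config without touching the cluster
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml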
	I1212 00:21:33.520909   54101 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:21:33.528771   54101 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:21:33.528832   54101 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:21:33.537845   54101 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 00:21:33.552578   54101 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:21:33.567275   54101 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1212 00:21:33.581608   54101 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:21:33.586017   54101 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:21:33.720787   54101 ssh_runner.go:195] Run: sudo systemctl start kubelet
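
After a kubelet (re)start like this one, the usual quick health check is:

    systemctl is-active kubelet                   # expect "active"
    sudo journalctl -u kubelet -n 20 --no-pager   # latest kubelet log lines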
	I1212 00:21:34.285938   54101 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012 for IP: 192.168.49.2
	I1212 00:21:34.285949   54101 certs.go:195] generating shared ca certs ...
	I1212 00:21:34.285964   54101 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:21:34.286114   54101 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 00:21:34.286160   54101 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 00:21:34.286167   54101 certs.go:257] generating profile certs ...
	I1212 00:21:34.286262   54101 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key
	I1212 00:21:34.286326   54101 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key.fcbff5a4
	I1212 00:21:34.286371   54101 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key
	I1212 00:21:34.286484   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 00:21:34.286514   54101 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 00:21:34.286521   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:21:34.286547   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:21:34.286569   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:21:34.286590   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 00:21:34.286633   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:21:34.287348   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:21:34.308553   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:21:34.331894   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:21:34.355464   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:21:34.374443   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:21:34.393434   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:21:34.411599   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:21:34.429619   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:21:34.447321   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 00:21:34.464997   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:21:34.482627   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 00:21:34.500926   54101 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:21:34.513622   54101 ssh_runner.go:195] Run: openssl version
	I1212 00:21:34.519764   54101 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 00:21:34.527069   54101 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 00:21:34.534472   54101 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 00:21:34.538121   54101 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 00:21:34.538179   54101 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 00:21:34.579437   54101 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:21:34.586891   54101 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 00:21:34.594262   54101 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 00:21:34.601868   54101 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 00:21:34.605501   54101 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 00:21:34.605557   54101 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 00:21:34.646393   54101 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:21:34.653807   54101 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:34.661225   54101 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:21:34.668768   54101 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:34.672511   54101 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:34.672567   54101 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:34.713655   54101 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
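
The `test -L /etc/ssl/certs/<hash>.0` checks above rely on OpenSSL's hashed-symlink convention: the link is named after the subject hash of the PEM, which is why each symlink test follows an `openssl x509 -hash` call. A minimal check for the CA installed above:

    # b5213941.0 above is the subject hash of minikubeCA.pem
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"   # should resolve back to minikubeCA.pem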
	I1212 00:21:34.721031   54101 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:21:34.724786   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:21:34.765815   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:21:34.806690   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:21:34.847558   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:21:34.888576   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:21:34.933434   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
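
`-checkend 86400` exits non-zero if the certificate expires within the next 24 hours, so the sweep above is scriptable as:

    # Flag any control-plane client cert that expires within a day
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
      sudo openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/${c}.crt" && echo "OK $c" || echo "EXPIRING $c"
    done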
	I1212 00:21:34.978399   54101 kubeadm.go:401] StartCluster: {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:21:34.978479   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 00:21:34.978543   54101 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:21:35.017576   54101 cri.go:89] found id: ""
	I1212 00:21:35.017638   54101 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:21:35.026096   54101 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:21:35.026118   54101 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:21:35.026171   54101 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:21:35.034785   54101 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:21:35.035314   54101 kubeconfig.go:125] found "functional-767012" server: "https://192.168.49.2:8441"
	I1212 00:21:35.036573   54101 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:21:35.046414   54101 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 00:07:00.613095536 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 00:21:33.576611675 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
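
The drift detection itself is just diff's exit status on the staged config:

    # diff exits 0 when identical, 1 when the rendered config drifted
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "no drift" || echo "drift detected; cluster will be reconfigured"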
	I1212 00:21:35.046427   54101 kubeadm.go:1161] stopping kube-system containers ...
	I1212 00:21:35.046437   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1212 00:21:35.046492   54101 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:21:35.082797   54101 cri.go:89] found id: ""
	I1212 00:21:35.082857   54101 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 00:21:35.102877   54101 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:21:35.111403   54101 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 12 00:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 12 00:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 12 00:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 12 00:11 /etc/kubernetes/scheduler.conf
	
	I1212 00:21:35.111465   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:21:35.120302   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:21:35.128075   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:21:35.128131   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:21:35.135780   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:21:35.143743   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:21:35.143796   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:21:35.151555   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:21:35.159766   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:21:35.159823   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
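
Each conf file is kept only if it already points at the expected control-plane endpoint; the same check, standalone:

    # Any conf missing the endpoint gets removed and regenerated by kubeadm below
    EP=https://control-plane.minikube.internal:8441
    for f in /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf; do
      sudo grep -q "$EP" "$f" 2>/dev/null || echo "would regenerate $f"
    done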
	I1212 00:21:35.167617   54101 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:21:35.175675   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:21:35.223997   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:21:36.520500   54101 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.296478898s)
	I1212 00:21:36.520559   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:21:36.729554   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:21:36.788511   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
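
The restart path replays individual kubeadm phases rather than a full init; the equivalent manual sequence with the binaries and config staged in this run:

    KUBEADM=/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
    CFG=/var/tmp/minikube/kubeadm.yaml
    # Same phase order as the log: certs, kubeconfigs, kubelet, static pods, etcd
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo $KUBEADM init phase $phase --config "$CFG"
    done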
	I1212 00:21:36.835897   54101 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:21:36.835964   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:37.336817   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... sudo pgrep -xnf kube-apiserver.*minikube.* repeated every ~500ms, 00:21:37.8 through 00:22:35.8, still finding no kube-apiserver process ...]
	I1212 00:22:36.336423   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:36.836018   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:36.836096   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:36.862427   54101 cri.go:89] found id: ""
	I1212 00:22:36.862441   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.862448   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:36.862453   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:36.862517   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:36.892149   54101 cri.go:89] found id: ""
	I1212 00:22:36.892163   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.892169   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:36.892175   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:36.892234   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:36.916655   54101 cri.go:89] found id: ""
	I1212 00:22:36.916670   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.916677   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:36.916681   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:36.916753   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:36.945533   54101 cri.go:89] found id: ""
	I1212 00:22:36.945546   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.945554   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:36.945559   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:36.945616   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:36.970456   54101 cri.go:89] found id: ""
	I1212 00:22:36.970469   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.970477   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:36.970482   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:36.970556   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:36.997550   54101 cri.go:89] found id: ""
	I1212 00:22:36.997568   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.997577   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:36.997582   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:36.997656   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:37.043296   54101 cri.go:89] found id: ""
	I1212 00:22:37.043319   54101 logs.go:282] 0 containers: []
	W1212 00:22:37.043326   54101 logs.go:284] No container was found matching "kindnet"
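
The per-component sweep above (seven crictl queries, each returning an empty ID list) compresses to a loop; zero containers for every name is what confirms the control plane never came up:

    # One line per component: how many containers (running or exited) exist
    for n in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      printf '%-24s %s\n' "$n" "$(sudo crictl ps -a --quiet --name="$n" | wc -l)"
    done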
	I1212 00:22:37.043334   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:37.043344   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:37.115314   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:37.115335   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:37.126489   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:37.126505   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:37.191880   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:37.183564   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.183995   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.185555   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.185892   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.187528   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:37.183564   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.183995   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.185555   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.185892   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.187528   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
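
The repeated connection-refused errors above only say that nothing is listening on 8441; a probe independent of kubectl narrows it down:

    # Is any process bound to the apiserver port?
    sudo ss -ltnp | grep ':8441' || echo "nothing listening on 8441"
    # If something is listening, hit the health endpoint directly
    curl -sk https://localhost:8441/healthz; echo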
	I1212 00:22:37.191890   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:37.191900   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:37.253331   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:37.253349   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:22:39.783593   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:39.793972   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:39.794055   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:39.822155   54101 cri.go:89] found id: ""
	I1212 00:22:39.822169   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.822176   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:39.822181   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:39.822250   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:39.847125   54101 cri.go:89] found id: ""
	I1212 00:22:39.847138   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.847145   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:39.847150   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:39.847210   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:39.872050   54101 cri.go:89] found id: ""
	I1212 00:22:39.872064   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.872072   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:39.872077   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:39.872143   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:39.896579   54101 cri.go:89] found id: ""
	I1212 00:22:39.896592   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.896599   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:39.896606   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:39.896664   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:39.921505   54101 cri.go:89] found id: ""
	I1212 00:22:39.921520   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.921537   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:39.921543   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:39.921602   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:39.949647   54101 cri.go:89] found id: ""
	I1212 00:22:39.949660   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.949667   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:39.949672   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:39.949739   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:39.972863   54101 cri.go:89] found id: ""
	I1212 00:22:39.972877   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.972886   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:39.972894   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:39.972904   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:39.983379   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:39.983394   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:40.083583   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:40.071923   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.073365   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.075724   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.076148   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.078746   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:22:40.083593   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:40.083604   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:40.153645   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:40.153664   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:22:40.181452   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:40.181471   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
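Each gathering pass pulls the last 400 lines of the kubelet and containerd systemd units, as in the two Run: lines above. A minimal local equivalent (unit names and line count taken from the log; running it needs systemd and non-interactive sudo):

package main

import (
	"fmt"
	"os/exec"
)

// unitLogs mirrors the journalctl invocation in the log: the last n lines
// of a single systemd unit, run through /bin/bash -c as ssh_runner does.
func unitLogs(unit string, n int) (string, error) {
	cmd := fmt.Sprintf("sudo journalctl -u %s -n %d", unit, n)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	for _, u := range []string{"kubelet", "containerd"} {
		logs, err := unitLogs(u, 400)
		if err != nil {
			fmt.Printf("journalctl -u %s: %v\n", u, err)
			continue
		}
		fmt.Printf("== %s: %d bytes ==\n", u, len(logs))
	}
}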
	I1212 00:22:42.742128   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:42.752298   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:42.752357   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:42.777204   54101 cri.go:89] found id: ""
	I1212 00:22:42.777218   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.777225   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:42.777236   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:42.777295   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:42.801649   54101 cri.go:89] found id: ""
	I1212 00:22:42.801663   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.801670   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:42.801675   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:42.801731   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:42.826035   54101 cri.go:89] found id: ""
	I1212 00:22:42.826048   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.826055   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:42.826059   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:42.826131   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:42.853290   54101 cri.go:89] found id: ""
	I1212 00:22:42.853303   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.853310   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:42.853316   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:42.853372   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:42.880012   54101 cri.go:89] found id: ""
	I1212 00:22:42.880025   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.880033   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:42.880037   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:42.880097   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:42.909253   54101 cri.go:89] found id: ""
	I1212 00:22:42.909267   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.909274   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:42.909279   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:42.909335   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:42.936731   54101 cri.go:89] found id: ""
	I1212 00:22:42.936745   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.936756   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:42.936764   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:42.936782   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:42.991768   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:42.991787   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:43.005267   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:43.005283   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:43.089221   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:43.080335   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.081099   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.082720   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.083301   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.084856   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:22:43.089233   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:43.089244   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:43.153170   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:43.153191   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
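The cycle above queries crictl once per control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) and finds zero containers each time. A sketch of that probe loop, as a simplified stand-in for the cri.go/logs.go path shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of CRI containers in any state whose name
// matches the filter; an empty result corresponds to the repeated
// `found id: "" ... 0 containers` lines above.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}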
	I1212 00:22:45.684515   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:45.696038   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:45.696106   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:45.721408   54101 cri.go:89] found id: ""
	I1212 00:22:45.721422   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.721439   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:45.721446   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:45.721518   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:45.746760   54101 cri.go:89] found id: ""
	I1212 00:22:45.746774   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.746781   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:45.746794   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:45.746852   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:45.784086   54101 cri.go:89] found id: ""
	I1212 00:22:45.784100   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.784107   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:45.784113   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:45.784196   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:45.809513   54101 cri.go:89] found id: ""
	I1212 00:22:45.809527   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.809534   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:45.809547   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:45.809603   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:45.833922   54101 cri.go:89] found id: ""
	I1212 00:22:45.833935   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.833943   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:45.833957   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:45.834020   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:45.858716   54101 cri.go:89] found id: ""
	I1212 00:22:45.858738   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.858745   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:45.858751   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:45.858819   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:45.886125   54101 cri.go:89] found id: ""
	I1212 00:22:45.886140   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.886161   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:45.886170   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:45.886181   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:22:45.913706   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:45.913723   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:45.972155   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:45.972173   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:45.982756   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:45.982771   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:46.057549   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:46.048888   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.049652   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.050838   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.051562   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.053189   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:22:46.057568   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:46.057589   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
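The dmesg pass seen throughout these cycles keeps only warning-and-above kernel messages and caps them at 400 lines; the flag string below is copied from the log (running it assumes util-linux dmesg and sudo):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Verbatim from the gathering step: no pager, human-readable
	// timestamps, no color, warn/err/crit/alert/emerg levels only.
	const cmd = "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("dmesg failed:", err)
	}
	fmt.Print(string(out))
}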
	I1212 00:22:48.631952   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:48.641871   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:48.641945   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:48.667026   54101 cri.go:89] found id: ""
	I1212 00:22:48.667040   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.667047   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:48.667052   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:48.667111   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:48.694393   54101 cri.go:89] found id: ""
	I1212 00:22:48.694407   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.694414   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:48.694419   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:48.694479   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:48.723393   54101 cri.go:89] found id: ""
	I1212 00:22:48.723406   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.723413   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:48.723418   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:48.723480   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:48.749414   54101 cri.go:89] found id: ""
	I1212 00:22:48.749427   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.749434   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:48.749440   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:48.749500   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:48.773494   54101 cri.go:89] found id: ""
	I1212 00:22:48.773508   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.773514   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:48.773520   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:48.773584   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:48.798476   54101 cri.go:89] found id: ""
	I1212 00:22:48.798490   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.798497   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:48.798502   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:48.798570   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:48.823097   54101 cri.go:89] found id: ""
	I1212 00:22:48.823112   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.823119   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:48.823127   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:48.823136   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:48.884369   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:48.884390   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:22:48.918017   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:48.918032   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:48.974636   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:48.974656   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:48.985524   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:48.985540   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:49.075379   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:49.063866   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.064550   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.067284   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.067979   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.070881   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:22:51.575612   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:51.585822   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:51.585880   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:51.611290   54101 cri.go:89] found id: ""
	I1212 00:22:51.611304   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.611311   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:51.611317   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:51.611376   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:51.638852   54101 cri.go:89] found id: ""
	I1212 00:22:51.638868   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.638875   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:51.638882   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:51.638941   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:51.663831   54101 cri.go:89] found id: ""
	I1212 00:22:51.663845   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.663852   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:51.663857   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:51.663914   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:51.689264   54101 cri.go:89] found id: ""
	I1212 00:22:51.689278   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.689286   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:51.689291   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:51.689350   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:51.714774   54101 cri.go:89] found id: ""
	I1212 00:22:51.714788   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.714795   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:51.714800   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:51.714889   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:51.739800   54101 cri.go:89] found id: ""
	I1212 00:22:51.739814   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.739822   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:51.739827   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:51.739885   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:51.767107   54101 cri.go:89] found id: ""
	I1212 00:22:51.767134   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.767142   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:51.767150   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:51.767160   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:51.821534   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:51.821552   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:51.832147   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:51.832161   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:51.897869   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:51.890100   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.890663   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.892157   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.892582   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.894067   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:22:51.897889   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:51.897899   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:51.958502   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:51.958519   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
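The timestamps show the whole sequence repeating roughly every three seconds: pgrep for an apiserver process, the per-component crictl probes, then another log sweep. A minimal standalone version of that wait loop (the 3 s interval and 2 min deadline are assumptions for illustration, not minikube's actual tunables):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the pgrep check in the log; pgrep exits
// non-zero when no matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed deadline
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // cadence observed in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}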
	I1212 00:22:54.487348   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:54.497592   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:54.497655   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:54.524765   54101 cri.go:89] found id: ""
	I1212 00:22:54.524779   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.524787   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:54.524800   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:54.524860   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:54.549685   54101 cri.go:89] found id: ""
	I1212 00:22:54.549699   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.549706   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:54.549710   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:54.549766   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:54.573523   54101 cri.go:89] found id: ""
	I1212 00:22:54.573537   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.573544   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:54.573549   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:54.573607   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:54.602326   54101 cri.go:89] found id: ""
	I1212 00:22:54.602342   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.602349   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:54.602354   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:54.602411   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:54.626746   54101 cri.go:89] found id: ""
	I1212 00:22:54.626777   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.626784   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:54.626792   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:54.626860   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:54.652678   54101 cri.go:89] found id: ""
	I1212 00:22:54.652693   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.652715   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:54.652720   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:54.652789   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:54.677588   54101 cri.go:89] found id: ""
	I1212 00:22:54.677602   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.677609   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:54.677617   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:54.677627   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:54.733727   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:54.733750   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:54.744434   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:54.744450   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:54.810290   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:54.802232   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.802635   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.804258   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.804924   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.806440   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:22:54.810301   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:54.810311   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:54.869777   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:54.869794   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:22:57.396960   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:57.406761   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:57.406819   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:57.431202   54101 cri.go:89] found id: ""
	I1212 00:22:57.431216   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.431223   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:57.431228   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:57.431285   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:57.456103   54101 cri.go:89] found id: ""
	I1212 00:22:57.456116   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.456123   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:57.456129   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:57.456185   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:57.482677   54101 cri.go:89] found id: ""
	I1212 00:22:57.482690   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.482697   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:57.482703   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:57.482776   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:57.507899   54101 cri.go:89] found id: ""
	I1212 00:22:57.507912   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.507919   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:57.507925   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:57.507986   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:57.536079   54101 cri.go:89] found id: ""
	I1212 00:22:57.536093   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.536101   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:57.536106   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:57.536167   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:57.564822   54101 cri.go:89] found id: ""
	I1212 00:22:57.564836   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.564843   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:57.564857   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:57.564923   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:57.589921   54101 cri.go:89] found id: ""
	I1212 00:22:57.589935   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.589943   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:57.589951   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:57.589961   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:57.648534   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:57.648552   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:57.659464   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:57.659481   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:57.727477   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:57.718925   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.719812   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.721551   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.722035   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.723542   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:22:57.727497   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:57.727508   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:57.791545   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:57.791567   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:00.319474   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:00.337512   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:00.337596   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:00.386008   54101 cri.go:89] found id: ""
	I1212 00:23:00.386034   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.386042   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:00.386048   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:00.386118   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:00.435933   54101 cri.go:89] found id: ""
	I1212 00:23:00.435948   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.435961   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:00.435966   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:00.436033   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:00.464332   54101 cri.go:89] found id: ""
	I1212 00:23:00.464347   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.464354   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:00.464360   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:00.464438   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:00.492272   54101 cri.go:89] found id: ""
	I1212 00:23:00.492288   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.492296   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:00.492308   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:00.492399   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:00.523157   54101 cri.go:89] found id: ""
	I1212 00:23:00.523172   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.523180   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:00.523185   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:00.523251   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:00.551205   54101 cri.go:89] found id: ""
	I1212 00:23:00.551219   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.551227   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:00.551232   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:00.551303   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:00.581595   54101 cri.go:89] found id: ""
	I1212 00:23:00.581609   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.581616   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:00.581624   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:00.581637   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:00.638838   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:00.638857   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:00.650126   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:00.650141   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:00.717921   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:00.707574   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.709178   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.709927   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.711724   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.712419   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:00.717933   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:00.717947   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:00.780105   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:00.780123   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:03.311322   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:03.323283   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:03.323344   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:03.361266   54101 cri.go:89] found id: ""
	I1212 00:23:03.361281   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.361288   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:03.361293   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:03.361353   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:03.386333   54101 cri.go:89] found id: ""
	I1212 00:23:03.386347   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.386353   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:03.386363   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:03.386421   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:03.413227   54101 cri.go:89] found id: ""
	I1212 00:23:03.413241   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.413248   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:03.413253   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:03.413310   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:03.437970   54101 cri.go:89] found id: ""
	I1212 00:23:03.437991   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.437999   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:03.438004   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:03.438060   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:03.466477   54101 cri.go:89] found id: ""
	I1212 00:23:03.466491   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.466499   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:03.466504   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:03.466561   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:03.491808   54101 cri.go:89] found id: ""
	I1212 00:23:03.491821   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.491828   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:03.491834   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:03.491890   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:03.517149   54101 cri.go:89] found id: ""
	I1212 00:23:03.517163   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.517170   54101 logs.go:284] No container was found matching "kindnet"
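	The seven per-component crictl queries above each come back with an empty ID list: containerd answers, but no control-plane container was ever created. The same scan can be done in one pass inside the node; this sketch reuses only flags already shown in the runner lines:

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      printf '%-24s %s containers\n' "$c" "$(sudo crictl ps -a --quiet --name="$c" | wc -l)"
	    done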
	I1212 00:23:03.517177   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:03.517187   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:03.572746   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:03.572773   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:03.584001   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:03.584018   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:03.656247   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:03.647626   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.648470   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.650161   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.650723   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.652396   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:03.656257   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:03.656268   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:03.722945   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:03.722971   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:06.251078   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:06.261552   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:06.261613   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:06.289582   54101 cri.go:89] found id: ""
	I1212 00:23:06.289597   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.289605   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:06.289610   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:06.289673   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:06.317842   54101 cri.go:89] found id: ""
	I1212 00:23:06.317855   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.317863   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:06.317868   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:06.317926   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:06.352672   54101 cri.go:89] found id: ""
	I1212 00:23:06.352685   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.352692   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:06.352697   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:06.352752   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:06.382465   54101 cri.go:89] found id: ""
	I1212 00:23:06.382479   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.382486   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:06.382491   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:06.382549   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:06.409293   54101 cri.go:89] found id: ""
	I1212 00:23:06.409307   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.409325   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:06.409351   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:06.409419   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:06.437827   54101 cri.go:89] found id: ""
	I1212 00:23:06.437842   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.437850   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:06.437855   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:06.437916   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:06.461631   54101 cri.go:89] found id: ""
	I1212 00:23:06.461645   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.461652   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:06.461660   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:06.461672   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:06.524818   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:06.524837   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:06.555647   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:06.555663   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:06.613018   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:06.613037   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:06.623988   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:06.624004   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:06.689835   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:06.681072   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.681903   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.683626   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.684195   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.685841   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
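	From here the loop repeats on a roughly three-second cadence (00:23:06, :09, :12, ...), each iteration beginning with the same pgrep wait for an apiserver process. An equivalent wait loop, as a sketch: the 3-second interval matches the timestamps above, while the deadline is illustrative only, since the real timeout is not visible in this excerpt:

	    deadline=$((SECONDS + 480))   # illustrative; not minikube's actual timeout
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      [ "$SECONDS" -ge "$deadline" ] && { echo 'kube-apiserver never appeared' >&2; exit 1; }
	      sleep 3
	    done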
	I1212 00:23:09.190077   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:09.199951   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:09.200011   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:09.224598   54101 cri.go:89] found id: ""
	I1212 00:23:09.224612   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.224619   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:09.224624   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:09.224680   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:09.249246   54101 cri.go:89] found id: ""
	I1212 00:23:09.249259   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.249266   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:09.249270   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:09.249326   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:09.276466   54101 cri.go:89] found id: ""
	I1212 00:23:09.276481   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.276488   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:09.276493   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:09.276569   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:09.305292   54101 cri.go:89] found id: ""
	I1212 00:23:09.305306   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.305320   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:09.305325   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:09.305385   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:09.340249   54101 cri.go:89] found id: ""
	I1212 00:23:09.340263   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.340269   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:09.340274   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:09.340335   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:09.371473   54101 cri.go:89] found id: ""
	I1212 00:23:09.371487   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.371494   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:09.371499   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:09.371560   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:09.396595   54101 cri.go:89] found id: ""
	I1212 00:23:09.396611   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.396618   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:09.396626   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:09.396639   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:09.455271   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:09.455288   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:09.465948   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:09.465963   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:09.533532   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:09.524698   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.525522   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.527378   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.527995   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.529577   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:09.533544   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:09.533554   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:09.595751   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:09.595769   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:12.124276   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:12.134222   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:12.134281   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:12.158363   54101 cri.go:89] found id: ""
	I1212 00:23:12.158377   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.158384   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:12.158390   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:12.158446   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:12.181913   54101 cri.go:89] found id: ""
	I1212 00:23:12.181930   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.181936   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:12.181941   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:12.181997   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:12.206035   54101 cri.go:89] found id: ""
	I1212 00:23:12.206048   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.206055   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:12.206060   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:12.206119   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:12.234593   54101 cri.go:89] found id: ""
	I1212 00:23:12.234606   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.234614   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:12.234618   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:12.234675   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:12.258839   54101 cri.go:89] found id: ""
	I1212 00:23:12.258853   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.258867   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:12.258873   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:12.258931   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:12.295188   54101 cri.go:89] found id: ""
	I1212 00:23:12.295202   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.295219   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:12.295225   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:12.295295   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:12.331819   54101 cri.go:89] found id: ""
	I1212 00:23:12.331833   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.331851   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:12.331859   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:12.331869   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:12.392019   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:12.392036   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:12.402367   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:12.402383   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:12.463715   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:12.455582   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:12.455962   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:12.457659   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:12.458359   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:12.459974   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:12.463724   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:12.463745   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:12.528182   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:12.528200   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:15.057258   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:15.068358   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:15.068421   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:15.094774   54101 cri.go:89] found id: ""
	I1212 00:23:15.094787   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.094804   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:15.094812   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:15.094882   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:15.120167   54101 cri.go:89] found id: ""
	I1212 00:23:15.120180   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.120188   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:15.120193   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:15.120249   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:15.150855   54101 cri.go:89] found id: ""
	I1212 00:23:15.150868   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.150886   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:15.150891   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:15.150958   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:15.179684   54101 cri.go:89] found id: ""
	I1212 00:23:15.179697   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.179704   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:15.179709   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:15.179784   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:15.204315   54101 cri.go:89] found id: ""
	I1212 00:23:15.204338   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.204345   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:15.204350   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:15.204425   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:15.229074   54101 cri.go:89] found id: ""
	I1212 00:23:15.229088   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.229095   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:15.229103   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:15.229168   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:15.253510   54101 cri.go:89] found id: ""
	I1212 00:23:15.253532   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.253540   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:15.253548   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:15.253559   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:15.264299   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:15.264317   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:15.346071   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:15.332347   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:15.334627   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:15.335427   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:15.337189   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:15.337763   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:15.346082   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:15.346092   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:15.414287   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:15.414306   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:15.440115   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:15.440130   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
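	Each iteration gathers the same four log sources. To collect them once by hand instead of per retry, the commands can be taken straight from the runner lines above and executed inside the node:

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a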
	I1212 00:23:17.999409   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:18.010537   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:18.010603   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:18.036961   54101 cri.go:89] found id: ""
	I1212 00:23:18.036975   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.036982   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:18.036988   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:18.037047   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:18.065553   54101 cri.go:89] found id: ""
	I1212 00:23:18.065568   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.065575   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:18.065582   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:18.065643   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:18.090902   54101 cri.go:89] found id: ""
	I1212 00:23:18.090916   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.090923   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:18.090927   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:18.090987   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:18.120598   54101 cri.go:89] found id: ""
	I1212 00:23:18.120611   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.120618   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:18.120623   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:18.120686   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:18.147780   54101 cri.go:89] found id: ""
	I1212 00:23:18.147794   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.147801   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:18.147806   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:18.147863   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:18.176272   54101 cri.go:89] found id: ""
	I1212 00:23:18.176286   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.176293   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:18.176306   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:18.176368   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:18.201024   54101 cri.go:89] found id: ""
	I1212 00:23:18.201037   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.201045   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:18.201052   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:18.201062   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:18.211552   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:18.211566   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:18.274135   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:18.266305   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.266699   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.268383   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.268854   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.270264   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
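	The "describe nodes" gather shells out to the versioned kubectl bundled in the node, pointed at the node-local kubeconfig. Running it directly is a quick way to confirm the failure is independent of the host-side kubeconfig; the command is copied from the runner line above:

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig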
	I1212 00:23:18.274145   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:18.274155   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:18.339516   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:18.339534   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:18.369221   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:18.369236   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:20.928503   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:20.938705   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:20.938771   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:20.966429   54101 cri.go:89] found id: ""
	I1212 00:23:20.966442   54101 logs.go:282] 0 containers: []
	W1212 00:23:20.966449   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:20.966463   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:20.966521   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:20.993659   54101 cri.go:89] found id: ""
	I1212 00:23:20.993674   54101 logs.go:282] 0 containers: []
	W1212 00:23:20.993694   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:20.993700   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:20.993783   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:21.021877   54101 cri.go:89] found id: ""
	I1212 00:23:21.021894   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.021901   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:21.021907   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:21.021974   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:21.050301   54101 cri.go:89] found id: ""
	I1212 00:23:21.050315   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.050333   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:21.050338   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:21.050394   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:21.074369   54101 cri.go:89] found id: ""
	I1212 00:23:21.074382   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.074399   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:21.074404   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:21.074459   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:21.100847   54101 cri.go:89] found id: ""
	I1212 00:23:21.100860   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.100867   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:21.100872   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:21.100930   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:21.129915   54101 cri.go:89] found id: ""
	I1212 00:23:21.129928   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.129950   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:21.129958   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:21.129967   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:21.186387   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:21.186407   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:21.197421   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:21.197437   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:21.261078   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:21.252661   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.253431   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.255174   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.255799   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.257304   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:21.261090   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:21.261104   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:21.326885   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:21.326903   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:23.859105   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:23.869083   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:23.869143   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:23.892667   54101 cri.go:89] found id: ""
	I1212 00:23:23.892681   54101 logs.go:282] 0 containers: []
	W1212 00:23:23.892688   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:23.892693   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:23.892755   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:23.916368   54101 cri.go:89] found id: ""
	I1212 00:23:23.916381   54101 logs.go:282] 0 containers: []
	W1212 00:23:23.916388   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:23.916393   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:23.916456   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:23.953674   54101 cri.go:89] found id: ""
	I1212 00:23:23.953688   54101 logs.go:282] 0 containers: []
	W1212 00:23:23.953695   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:23.953700   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:23.953755   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:23.977280   54101 cri.go:89] found id: ""
	I1212 00:23:23.977293   54101 logs.go:282] 0 containers: []
	W1212 00:23:23.977300   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:23.977305   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:23.977364   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:24.002961   54101 cri.go:89] found id: ""
	I1212 00:23:24.002985   54101 logs.go:282] 0 containers: []
	W1212 00:23:24.003014   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:24.003020   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:24.003098   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:24.034368   54101 cri.go:89] found id: ""
	I1212 00:23:24.034382   54101 logs.go:282] 0 containers: []
	W1212 00:23:24.034393   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:24.034398   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:24.034470   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:24.065761   54101 cri.go:89] found id: ""
	I1212 00:23:24.065775   54101 logs.go:282] 0 containers: []
	W1212 00:23:24.065788   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:24.065796   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:24.065806   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:24.122870   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:24.122890   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:24.134384   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:24.134398   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:24.204008   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:24.196235   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.196812   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.198515   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.198869   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.200088   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:24.204018   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:24.204029   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:24.268817   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:24.268835   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:26.805407   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:26.815561   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:26.815619   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:26.843361   54101 cri.go:89] found id: ""
	I1212 00:23:26.843375   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.843382   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:26.843388   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:26.843447   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:26.867615   54101 cri.go:89] found id: ""
	I1212 00:23:26.867630   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.867637   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:26.867642   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:26.867698   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:26.897089   54101 cri.go:89] found id: ""
	I1212 00:23:26.897102   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.897109   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:26.897114   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:26.897173   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:26.920797   54101 cri.go:89] found id: ""
	I1212 00:23:26.920810   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.920817   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:26.920822   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:26.920878   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:26.948949   54101 cri.go:89] found id: ""
	I1212 00:23:26.948963   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.948970   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:26.948975   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:26.949034   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:26.972541   54101 cri.go:89] found id: ""
	I1212 00:23:26.972555   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.972563   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:26.972568   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:26.972631   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:26.998049   54101 cri.go:89] found id: ""
	I1212 00:23:26.998065   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.998073   54101 logs.go:284] No container was found matching "kindnet"
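
minikube probes each control-plane component here with a separate filtered crictl call. The same information can be gathered in one pass; a sketch of the equivalent check, assuming crictl on the node as the log's own commands already do (crictl's --name filter accepts a regular expression):

    # List every container once; empty output corresponds to the seven
    # 'found id: ""' results above.
    minikube -p functional-767012 ssh -- "sudo crictl ps -a"

    # Or keep the per-component filter style but match all names at once:
    minikube -p functional-767012 ssh -- \
      "sudo crictl ps -a --name 'kube-apiserver|etcd|coredns|kube-scheduler|kube-proxy|kube-controller-manager|kindnet'"
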
	I1212 00:23:26.998089   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:26.998102   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:27.027523   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:27.027538   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:27.085127   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:27.085146   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
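
The kubelet, containerd, and dmesg gathering steps are plain journalctl and dmesg invocations run over SSH, so they can be replayed against the live node to follow the failure as it happens; a sketch, again assuming the functional-767012 profile:

    # Follow kubelet live instead of taking the 400-line snapshot above:
    minikube -p functional-767012 ssh -- "sudo journalctl -u kubelet -f"

    # Same warn-and-above kernel-log filter the test uses:
    minikube -p functional-767012 ssh -- \
      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
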
	I1212 00:23:27.096087   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:27.096101   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:27.162090   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:27.153308   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.154010   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.155943   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.156645   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.158348   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:27.153308   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.154010   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.155943   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.156645   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.158348   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
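
Note that describe nodes is run with --kubeconfig=/var/lib/minikube/kubeconfig, so the localhost:8441 target comes from the in-node kubeconfig, not from the host's. To confirm which server that file names, the same pinned kubectl binary can print it (a sketch):

    # Should print https://localhost:8441, matching the refused connections above.
    minikube -p functional-767012 ssh -- \
      "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl config view \
         --kubeconfig=/var/lib/minikube/kubeconfig \
         -o jsonpath='{.clusters[0].cluster.server}'"
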
	I1212 00:23:27.162101   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:27.162111   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:29.728366   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:29.738393   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:29.738452   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:29.764004   54101 cri.go:89] found id: ""
	I1212 00:23:29.764017   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.764024   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:29.764029   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:29.764089   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:29.787843   54101 cri.go:89] found id: ""
	I1212 00:23:29.787857   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.787874   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:29.787879   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:29.787936   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:29.812859   54101 cri.go:89] found id: ""
	I1212 00:23:29.812872   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.812879   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:29.812884   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:29.812941   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:29.837580   54101 cri.go:89] found id: ""
	I1212 00:23:29.837593   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.837600   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:29.837605   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:29.837673   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:29.861535   54101 cri.go:89] found id: ""
	I1212 00:23:29.861560   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.861567   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:29.861572   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:29.861644   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:29.886533   54101 cri.go:89] found id: ""
	I1212 00:23:29.886546   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.886553   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:29.886559   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:29.886624   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:29.913577   54101 cri.go:89] found id: ""
	I1212 00:23:29.913604   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.913611   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:29.913619   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:29.913630   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:29.940660   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:29.940675   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:29.995286   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:29.995307   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:30.029235   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:30.029252   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:30.103143   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:30.093717   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.094664   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.096287   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.096764   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.098381   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:30.093717   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.094664   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.096287   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.096764   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.098381   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:30.103157   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:30.103168   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:32.666081   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
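
Each retry cycle opens with this pgrep probe, which exits non-zero until a kube-apiserver process appears; the timestamps show minikube re-running the whole cycle roughly every three seconds. The same wait can be scripted directly (a sketch mirroring that cadence):

    # Poll until an apiserver process exists inside the node.
    until minikube -p functional-767012 ssh -- \
        "sudo pgrep -xnf 'kube-apiserver.*minikube.*'" >/dev/null 2>&1; do
      echo "kube-apiserver not running yet; retrying in 3s..."
      sleep 3
    done
    echo "kube-apiserver process found"
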
	I1212 00:23:32.676000   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:32.676071   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:32.701112   54101 cri.go:89] found id: ""
	I1212 00:23:32.701125   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.701133   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:32.701138   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:32.701195   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:32.727727   54101 cri.go:89] found id: ""
	I1212 00:23:32.727741   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.727748   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:32.727753   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:32.727810   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:32.756561   54101 cri.go:89] found id: ""
	I1212 00:23:32.756574   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.756581   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:32.756586   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:32.756648   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:32.781745   54101 cri.go:89] found id: ""
	I1212 00:23:32.781758   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.781765   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:32.781771   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:32.781830   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:32.807544   54101 cri.go:89] found id: ""
	I1212 00:23:32.807558   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.807571   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:32.807576   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:32.807634   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:32.837232   54101 cri.go:89] found id: ""
	I1212 00:23:32.837246   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.837253   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:32.837259   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:32.837321   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:32.864631   54101 cri.go:89] found id: ""
	I1212 00:23:32.864645   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.864660   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:32.864667   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:32.864678   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:32.927240   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:32.919009   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.919629   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.921337   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.921842   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.923382   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:32.919009   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.919629   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.921337   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.921842   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.923382   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:32.927249   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:32.927276   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:32.990198   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:32.990226   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
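
The container-status command above layers two fallbacks: the backtick substitution `which crictl || echo crictl` degrades to a bare crictl name if the binary is not on root's PATH, and `|| sudo docker ps -a` switches to Docker only if the crictl invocation itself fails. Written out step by step (a sketch of the same chain):

    # Expanded form of the one-liner in the Run: line above.
    CRICTL="$(which crictl || echo crictl)"       # bare name if not on PATH
    sudo "$CRICTL" ps -a || sudo docker ps -a     # Docker only when crictl fails
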
	I1212 00:23:33.020370   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:33.020389   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:33.077339   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:33.077359   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:35.589167   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:35.599047   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:35.599105   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:35.624300   54101 cri.go:89] found id: ""
	I1212 00:23:35.624315   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.624322   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:35.624327   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:35.624387   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:35.647815   54101 cri.go:89] found id: ""
	I1212 00:23:35.647829   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.647837   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:35.647842   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:35.647900   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:35.676530   54101 cri.go:89] found id: ""
	I1212 00:23:35.676544   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.676551   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:35.676556   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:35.676617   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:35.705816   54101 cri.go:89] found id: ""
	I1212 00:23:35.705831   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.705838   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:35.705844   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:35.705903   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:35.733393   54101 cri.go:89] found id: ""
	I1212 00:23:35.733413   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.733421   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:35.733426   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:35.733485   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:35.757717   54101 cri.go:89] found id: ""
	I1212 00:23:35.757731   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.757738   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:35.757743   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:35.757800   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:35.782446   54101 cri.go:89] found id: ""
	I1212 00:23:35.782459   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.782478   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:35.782487   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:35.782497   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:35.839811   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:35.839828   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:35.850443   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:35.850458   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:35.918359   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:35.910728   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.911186   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.912701   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.913021   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.914471   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:35.910728   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.911186   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.912701   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.913021   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.914471   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:35.918370   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:35.918382   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:35.980124   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:35.980143   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:38.530800   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:38.542531   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:38.542599   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:38.568754   54101 cri.go:89] found id: ""
	I1212 00:23:38.568767   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.568774   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:38.568788   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:38.568846   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:38.598747   54101 cri.go:89] found id: ""
	I1212 00:23:38.598759   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.598766   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:38.598771   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:38.598838   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:38.623489   54101 cri.go:89] found id: ""
	I1212 00:23:38.623503   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.623519   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:38.623525   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:38.623594   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:38.648000   54101 cri.go:89] found id: ""
	I1212 00:23:38.648013   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.648022   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:38.648027   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:38.648084   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:38.674721   54101 cri.go:89] found id: ""
	I1212 00:23:38.674734   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.674741   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:38.674746   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:38.674808   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:38.700695   54101 cri.go:89] found id: ""
	I1212 00:23:38.700708   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.700715   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:38.700720   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:38.700780   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:38.724873   54101 cri.go:89] found id: ""
	I1212 00:23:38.724886   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.724892   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:38.724900   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:38.724910   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:38.751419   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:38.751434   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:38.807512   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:38.807530   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:38.818972   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:38.819002   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:38.889413   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:38.879843   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.881217   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.881803   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.883544   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.884066   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:38.879843   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.881217   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.881803   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.883544   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.884066   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:38.889425   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:38.889435   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:41.452716   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:41.462650   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:41.462718   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:41.487241   54101 cri.go:89] found id: ""
	I1212 00:23:41.487264   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.487271   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:41.487277   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:41.487335   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:41.511441   54101 cri.go:89] found id: ""
	I1212 00:23:41.511454   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.511461   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:41.511466   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:41.511523   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:41.560805   54101 cri.go:89] found id: ""
	I1212 00:23:41.560819   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.560826   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:41.560831   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:41.560887   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:41.587388   54101 cri.go:89] found id: ""
	I1212 00:23:41.587402   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.587408   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:41.587413   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:41.587469   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:41.611964   54101 cri.go:89] found id: ""
	I1212 00:23:41.611979   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.611986   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:41.611991   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:41.612051   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:41.637582   54101 cri.go:89] found id: ""
	I1212 00:23:41.637595   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.637601   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:41.637606   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:41.637662   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:41.660916   54101 cri.go:89] found id: ""
	I1212 00:23:41.660939   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.660947   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:41.660955   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:41.660964   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:41.720148   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:41.720165   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:41.730670   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:41.730686   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:41.792978   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:41.784826   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.785364   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.786819   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.787322   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.788953   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:41.784826   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.785364   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.786819   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.787322   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.788953   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:41.792987   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:41.792997   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:41.853248   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:41.853264   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:44.384182   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:44.394508   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:44.394568   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:44.418597   54101 cri.go:89] found id: ""
	I1212 00:23:44.418612   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.418619   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:44.418624   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:44.418681   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:44.443581   54101 cri.go:89] found id: ""
	I1212 00:23:44.443595   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.443603   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:44.443608   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:44.443665   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:44.468881   54101 cri.go:89] found id: ""
	I1212 00:23:44.468895   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.468902   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:44.468907   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:44.468965   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:44.493396   54101 cri.go:89] found id: ""
	I1212 00:23:44.493410   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.493417   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:44.493422   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:44.493479   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:44.517484   54101 cri.go:89] found id: ""
	I1212 00:23:44.517498   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.517505   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:44.517510   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:44.517570   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:44.550796   54101 cri.go:89] found id: ""
	I1212 00:23:44.550810   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.550817   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:44.550822   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:44.550883   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:44.576925   54101 cri.go:89] found id: ""
	I1212 00:23:44.576938   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.576946   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:44.576954   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:44.576964   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:44.589144   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:44.589160   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:44.657506   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:44.648963   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.649564   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.651341   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.651846   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.653593   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:44.648963   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.649564   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.651341   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.651846   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.653593   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:44.657515   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:44.657526   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
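
One useful reading of these cycles: every crictl query succeeds but returns zero containers, so the container runtime itself is answering while the kubelet-managed control-plane pods are simply absent. That distinction can be checked explicitly (a sketch; the systemd unit names are assumed to match the minikube node image):

    # Runtime reachable but no pods => the failure sits above the CRI layer.
    minikube -p functional-767012 ssh -- "sudo systemctl is-active containerd"
    minikube -p functional-767012 ssh -- "sudo crictl info | head -n 20"
    minikube -p functional-767012 ssh -- \
      "sudo systemctl status kubelet --no-pager | head -n 20"
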
	I1212 00:23:44.718495   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:44.718513   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:44.745494   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:44.745508   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:47.304216   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:47.314254   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:47.314318   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:47.339739   54101 cri.go:89] found id: ""
	I1212 00:23:47.339753   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.339760   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:47.339766   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:47.339822   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:47.364136   54101 cri.go:89] found id: ""
	I1212 00:23:47.364150   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.364157   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:47.364162   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:47.364226   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:47.387941   54101 cri.go:89] found id: ""
	I1212 00:23:47.387957   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.387964   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:47.387969   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:47.388026   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:47.412100   54101 cri.go:89] found id: ""
	I1212 00:23:47.412114   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.412121   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:47.412126   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:47.412187   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:47.437977   54101 cri.go:89] found id: ""
	I1212 00:23:47.437997   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.438005   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:47.438011   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:47.438070   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:47.464751   54101 cri.go:89] found id: ""
	I1212 00:23:47.464765   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.464772   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:47.464778   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:47.464834   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:47.492824   54101 cri.go:89] found id: ""
	I1212 00:23:47.492838   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.492845   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:47.492853   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:47.492863   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:47.549187   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:47.549205   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:47.561345   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:47.561361   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:47.637229   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:47.628185   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.629565   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.630374   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.631980   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.632725   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:47.628185   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.629565   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.630374   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.631980   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.632725   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:47.637238   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:47.637249   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:47.700044   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:47.700063   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:50.232142   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:50.242326   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:50.242389   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:50.267337   54101 cri.go:89] found id: ""
	I1212 00:23:50.267351   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.267359   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:50.267364   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:50.267424   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:50.294402   54101 cri.go:89] found id: ""
	I1212 00:23:50.294416   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.294424   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:50.294428   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:50.294489   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:50.318907   54101 cri.go:89] found id: ""
	I1212 00:23:50.318921   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.318928   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:50.318938   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:50.319041   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:50.344349   54101 cri.go:89] found id: ""
	I1212 00:23:50.344362   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.344370   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:50.344375   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:50.344442   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:50.374529   54101 cri.go:89] found id: ""
	I1212 00:23:50.374543   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.374550   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:50.374556   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:50.374612   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:50.400874   54101 cri.go:89] found id: ""
	I1212 00:23:50.400888   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.400896   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:50.400903   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:50.400977   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:50.428510   54101 cri.go:89] found id: ""
	I1212 00:23:50.428525   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.428533   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:50.428541   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:50.428553   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:50.455528   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:50.455545   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:50.510724   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:50.510743   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:50.521665   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:50.521681   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:50.611401   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:50.603277   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.603798   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.605445   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.605921   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.607608   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:50.603277   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.603798   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.605445   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.605921   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.607608   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:50.611411   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:50.611424   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
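	(When every probe comes back empty, the collector falls back to the same four log sources each cycle: the kubelet and containerd journals (last 400 lines), kernel messages at warning level and above — per the logged flags, `dmesg -PH -L=never` appears to keep the output plain: no pager, human timestamps, no color — and a container listing. A sketch that replays exactly the commands shown in the log, under the same in-node assumptions as the previous sketch:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func main() {
	    	// Command strings copied verbatim from the log above.
	    	cmds := [][2]string{
	    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
	    		{"containerd", "sudo journalctl -u containerd -n 400"},
	    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	    	}
	    	for _, c := range cmds {
	    		fmt.Printf("== %s ==\n", c[0])
	    		out, err := exec.Command("/bin/bash", "-c", c[1]).CombinedOutput()
	    		fmt.Print(string(out))
	    		if err != nil {
	    			fmt.Printf("(command error: %v)\n", err)
	    		}
	    	}
	    }
	)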
	I1212 00:23:53.175490   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:53.185411   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:53.185474   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:53.209584   54101 cri.go:89] found id: ""
	I1212 00:23:53.209597   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.209616   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:53.209628   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:53.209693   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:53.233686   54101 cri.go:89] found id: ""
	I1212 00:23:53.233700   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.233707   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:53.233712   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:53.233774   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:53.257587   54101 cri.go:89] found id: ""
	I1212 00:23:53.257601   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.257608   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:53.257613   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:53.257670   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:53.285867   54101 cri.go:89] found id: ""
	I1212 00:23:53.285880   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.285887   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:53.285892   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:53.285947   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:53.312516   54101 cri.go:89] found id: ""
	I1212 00:23:53.312530   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.312537   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:53.312541   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:53.312599   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:53.336425   54101 cri.go:89] found id: ""
	I1212 00:23:53.336445   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.336452   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:53.336457   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:53.336514   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:53.360258   54101 cri.go:89] found id: ""
	I1212 00:23:53.360271   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.360279   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:53.360287   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:53.360296   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:53.422643   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:53.422660   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:53.451682   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:53.451698   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:53.508302   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:53.508320   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:53.518839   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:53.518855   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:53.608163   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:53.599819   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.600615   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.602118   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.602666   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.604185   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:53.599819   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.600615   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.602118   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.602666   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.604185   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
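	(The repeated `dial tcp [::1]:8441: connect: connection refused` lines are the key symptom: kubectl reaches the node, but nothing is listening on the configured --apiserver-port 8441, which is consistent with every kube-apiserver probe above coming back empty. Connection-refused is distinguishable from a timeout or routing failure with a direct dial; a minimal sketch, assuming a Linux host where syscall.ECONNREFUSED applies:

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"net"
	    	"syscall"
	    	"time"
	    )

	    func main() {
	    	// Probe the port kubectl is failing against in the log above.
	    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	    	if err == nil {
	    		conn.Close()
	    		fmt.Println("listener present on 8441")
	    		return
	    	}
	    	if errors.Is(err, syscall.ECONNREFUSED) {
	    		// Host reachable, port closed: the apiserver process never bound it.
	    		fmt.Println("connection refused: nothing listening on 8441")
	    	} else {
	    		fmt.Printf("dial failed differently (network/path problem?): %v\n", err)
	    	}
	    }
	)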
	I1212 00:23:56.109087   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:56.119165   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:56.119227   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:56.143243   54101 cri.go:89] found id: ""
	I1212 00:23:56.143256   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.143263   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:56.143268   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:56.143326   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:56.168289   54101 cri.go:89] found id: ""
	I1212 00:23:56.168309   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.168316   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:56.168321   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:56.168379   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:56.192149   54101 cri.go:89] found id: ""
	I1212 00:23:56.192163   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.192172   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:56.192177   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:56.192238   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:56.216868   54101 cri.go:89] found id: ""
	I1212 00:23:56.216880   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.216887   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:56.216892   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:56.216954   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:56.241928   54101 cri.go:89] found id: ""
	I1212 00:23:56.241941   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.241951   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:56.241956   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:56.242011   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:56.265468   54101 cri.go:89] found id: ""
	I1212 00:23:56.265481   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.265488   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:56.265493   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:56.265552   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:56.290530   54101 cri.go:89] found id: ""
	I1212 00:23:56.290544   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.290551   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:56.290559   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:56.290569   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:56.345149   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:56.345167   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:56.355854   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:56.355869   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:56.418379   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:56.410553   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.411250   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.412854   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.413395   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.414621   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:56.410553   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.411250   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.412854   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.413395   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.414621   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:56.418389   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:56.418399   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:56.480524   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:56.480543   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:59.011832   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:59.022048   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:59.022108   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:59.046210   54101 cri.go:89] found id: ""
	I1212 00:23:59.046224   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.046231   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:59.046236   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:59.046299   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:59.071192   54101 cri.go:89] found id: ""
	I1212 00:23:59.071206   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.071213   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:59.071217   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:59.071278   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:59.095678   54101 cri.go:89] found id: ""
	I1212 00:23:59.095692   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.095698   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:59.095703   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:59.095760   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:59.119812   54101 cri.go:89] found id: ""
	I1212 00:23:59.119825   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.119832   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:59.119837   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:59.119897   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:59.143943   54101 cri.go:89] found id: ""
	I1212 00:23:59.143957   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.143964   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:59.143969   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:59.144028   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:59.174483   54101 cri.go:89] found id: ""
	I1212 00:23:59.174506   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.174513   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:59.174519   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:59.174576   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:59.202048   54101 cri.go:89] found id: ""
	I1212 00:23:59.202061   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.202068   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:59.202076   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:59.202087   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:59.257143   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:59.257161   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:59.268235   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:59.268252   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:59.334149   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:59.326488   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.326882   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.328393   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.328789   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.330327   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:59.326488   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.326882   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.328393   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.328789   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.330327   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:59.334159   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:59.334184   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:59.396366   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:59.396383   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:01.926850   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:01.937253   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:01.937312   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:01.966272   54101 cri.go:89] found id: ""
	I1212 00:24:01.966286   54101 logs.go:282] 0 containers: []
	W1212 00:24:01.966293   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:01.966298   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:01.966359   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:01.991061   54101 cri.go:89] found id: ""
	I1212 00:24:01.991075   54101 logs.go:282] 0 containers: []
	W1212 00:24:01.991082   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:01.991087   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:01.991145   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:02.019646   54101 cri.go:89] found id: ""
	I1212 00:24:02.019661   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.019668   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:02.019673   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:02.019731   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:02.044619   54101 cri.go:89] found id: ""
	I1212 00:24:02.044634   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.044641   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:02.044648   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:02.044704   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:02.069486   54101 cri.go:89] found id: ""
	I1212 00:24:02.069500   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.069508   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:02.069512   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:02.069569   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:02.096887   54101 cri.go:89] found id: ""
	I1212 00:24:02.096901   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.096908   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:02.096913   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:02.096974   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:02.124826   54101 cri.go:89] found id: ""
	I1212 00:24:02.124839   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.124847   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:02.124854   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:02.124864   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:02.152773   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:02.152789   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:02.210656   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:02.210676   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:02.222006   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:02.222022   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:02.293474   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:02.284427   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.285315   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.287050   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.287829   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.289483   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:02.284427   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.285315   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.287050   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.287829   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.289483   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:02.293484   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:02.293499   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:04.860582   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:04.870768   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:04.870829   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:04.896675   54101 cri.go:89] found id: ""
	I1212 00:24:04.896689   54101 logs.go:282] 0 containers: []
	W1212 00:24:04.896696   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:04.896701   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:04.896759   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:04.925636   54101 cri.go:89] found id: ""
	I1212 00:24:04.925651   54101 logs.go:282] 0 containers: []
	W1212 00:24:04.925658   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:04.925664   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:04.925730   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:04.950839   54101 cri.go:89] found id: ""
	I1212 00:24:04.950853   54101 logs.go:282] 0 containers: []
	W1212 00:24:04.950860   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:04.950865   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:04.950922   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:04.976777   54101 cri.go:89] found id: ""
	I1212 00:24:04.976792   54101 logs.go:282] 0 containers: []
	W1212 00:24:04.976799   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:04.976804   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:04.976862   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:05.007523   54101 cri.go:89] found id: ""
	I1212 00:24:05.007538   54101 logs.go:282] 0 containers: []
	W1212 00:24:05.007547   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:05.007552   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:05.007615   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:05.034390   54101 cri.go:89] found id: ""
	I1212 00:24:05.034412   54101 logs.go:282] 0 containers: []
	W1212 00:24:05.034419   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:05.034424   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:05.034492   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:05.060364   54101 cri.go:89] found id: ""
	I1212 00:24:05.060378   54101 logs.go:282] 0 containers: []
	W1212 00:24:05.060385   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:05.060394   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:05.060405   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:05.130824   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:05.122601   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.123172   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.124809   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.125287   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.126908   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:05.122601   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.123172   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.124809   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.125287   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.126908   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:05.130836   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:05.130846   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:05.193088   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:05.193106   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:05.221288   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:05.221305   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:05.280911   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:05.280928   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
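	(The pgrep timestamps show the whole cycle repeating on a roughly three-second cadence (00:23:47, :50, :53, :56, ...). A hedged sketch of that bounded polling pattern, with the pgrep pattern copied verbatim from the log; waitForAPIServer is a hypothetical helper for illustration, not a minikube function:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"time"
	    )

	    // waitForAPIServer polls for a kube-apiserver process roughly every
	    // three seconds, as the cycles above do, giving up at the deadline.
	    func waitForAPIServer(timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		// pgrep exits 0 when the pattern matches a running process.
	    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
	    			return nil
	    		}
	    		time.Sleep(3 * time.Second)
	    	}
	    	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	    }

	    func main() {
	    	if err := waitForAPIServer(time.Minute); err != nil {
	    		fmt.Println(err)
	    	}
	    }
	)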
	I1212 00:24:07.791957   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:07.803197   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:07.803258   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:07.849866   54101 cri.go:89] found id: ""
	I1212 00:24:07.849879   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.849885   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:07.849890   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:07.849944   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:07.879098   54101 cri.go:89] found id: ""
	I1212 00:24:07.879112   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.879118   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:07.879123   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:07.879180   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:07.903042   54101 cri.go:89] found id: ""
	I1212 00:24:07.903056   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.903063   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:07.903068   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:07.903124   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:07.926973   54101 cri.go:89] found id: ""
	I1212 00:24:07.926986   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.927024   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:07.927029   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:07.927093   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:07.952849   54101 cri.go:89] found id: ""
	I1212 00:24:07.952863   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.952870   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:07.952875   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:07.952937   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:07.976048   54101 cri.go:89] found id: ""
	I1212 00:24:07.976061   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.976068   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:07.976073   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:07.976127   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:08.005144   54101 cri.go:89] found id: ""
	I1212 00:24:08.005157   54101 logs.go:282] 0 containers: []
	W1212 00:24:08.005165   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:08.005173   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:08.005183   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:08.062459   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:08.062477   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:08.073793   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:08.073821   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:08.140014   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:08.132203   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.132726   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.134246   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.134712   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.136200   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:08.132203   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.132726   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.134246   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.134712   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.136200   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:08.140025   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:08.140035   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:08.202051   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:08.202070   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:10.733798   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:10.743998   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:10.744057   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:10.768781   54101 cri.go:89] found id: ""
	I1212 00:24:10.768795   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.768802   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:10.768807   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:10.768871   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:10.811478   54101 cri.go:89] found id: ""
	I1212 00:24:10.811492   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.811499   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:10.811504   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:10.811570   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:10.842339   54101 cri.go:89] found id: ""
	I1212 00:24:10.842358   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.842365   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:10.842370   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:10.842431   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:10.874129   54101 cri.go:89] found id: ""
	I1212 00:24:10.874143   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.874151   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:10.874157   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:10.874217   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:10.898217   54101 cri.go:89] found id: ""
	I1212 00:24:10.898231   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.898244   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:10.898249   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:10.898306   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:10.923360   54101 cri.go:89] found id: ""
	I1212 00:24:10.923374   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.923380   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:10.923385   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:10.923442   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:10.947605   54101 cri.go:89] found id: ""
	I1212 00:24:10.947619   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.947626   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:10.947634   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:10.947645   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:11.006969   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:11.006995   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:11.018264   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:11.018281   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:11.082660   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:11.073705   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.074224   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.075940   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.076685   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.078178   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:11.073705   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.074224   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.075940   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.076685   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.078178   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:11.082671   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:11.082681   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:11.144246   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:11.144263   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:13.671933   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:13.683185   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:13.683253   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:13.708906   54101 cri.go:89] found id: ""
	I1212 00:24:13.708920   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.708927   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:13.708932   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:13.709070   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:13.733465   54101 cri.go:89] found id: ""
	I1212 00:24:13.733479   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.733486   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:13.733491   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:13.733555   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:13.757055   54101 cri.go:89] found id: ""
	I1212 00:24:13.757069   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.757076   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:13.757084   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:13.757142   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:13.781588   54101 cri.go:89] found id: ""
	I1212 00:24:13.781602   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.781609   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:13.781614   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:13.781674   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:13.811312   54101 cri.go:89] found id: ""
	I1212 00:24:13.811325   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.811333   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:13.811337   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:13.811394   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:13.844313   54101 cri.go:89] found id: ""
	I1212 00:24:13.844326   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.844333   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:13.844338   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:13.844421   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:13.868420   54101 cri.go:89] found id: ""
	I1212 00:24:13.868434   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.868441   54101 logs.go:284] No container was found matching "kindnet"
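The pass above is minikube's control-plane probe: it greps for a running kube-apiserver process, then queries the CRI runtime for each expected component container by name; every query in this section returns an empty ID list, so none of the control-plane containers were ever created. A minimal standalone sketch of the same probe, runnable on the node (the loop wrapper is hypothetical; the individual commands are taken verbatim from the Run lines above):

    # sketch: reproduce the per-component CRI probe shown in the log
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        ids="$(sudo crictl ps -a --quiet --name="$name")"
        [ -z "$ids" ] && echo "no container found matching \"$name\""
    done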
	I1212 00:24:13.868449   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:13.868459   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:13.923519   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:13.923536   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:13.934615   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:13.934631   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:14.000483   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:13.989816   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.990515   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.992025   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.992486   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.995350   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
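Each describe-nodes attempt fails the same way: kubectl dials the apiserver endpoint from the kubeconfig (localhost:8441, resolved here to the IPv6 loopback [::1]) and is refused, consistent with the probe above finding no kube-apiserver container. The failing check can be rerun by hand on the node with the exact command from the log:

    # expected to fail with 'connection refused' while the apiserver is down
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig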
	I1212 00:24:14.000493   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:14.000505   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:14.063145   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:14.063165   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:16.593154   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:16.603519   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:16.603584   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:16.632576   54101 cri.go:89] found id: ""
	I1212 00:24:16.632589   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.632596   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:16.632603   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:16.632663   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:16.661504   54101 cri.go:89] found id: ""
	I1212 00:24:16.661518   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.661525   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:16.661530   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:16.661587   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:16.686915   54101 cri.go:89] found id: ""
	I1212 00:24:16.686930   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.686937   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:16.686942   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:16.687035   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:16.711579   54101 cri.go:89] found id: ""
	I1212 00:24:16.711594   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.711601   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:16.711606   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:16.711664   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:16.735976   54101 cri.go:89] found id: ""
	I1212 00:24:16.735990   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.735998   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:16.736003   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:16.736058   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:16.760337   54101 cri.go:89] found id: ""
	I1212 00:24:16.760351   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.760359   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:16.760364   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:16.760429   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:16.787594   54101 cri.go:89] found id: ""
	I1212 00:24:16.787608   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.787625   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:16.787634   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:16.787644   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:16.853787   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:16.853805   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:16.865402   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:16.865418   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:16.934251   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:16.925653   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.926416   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.928097   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.928745   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.930355   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:16.934261   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:16.934272   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:16.995335   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:16.995360   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:19.530311   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:19.540648   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:19.540711   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:19.573854   54101 cri.go:89] found id: ""
	I1212 00:24:19.573868   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.573875   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:19.573880   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:19.573938   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:19.598830   54101 cri.go:89] found id: ""
	I1212 00:24:19.598850   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.598857   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:19.598862   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:19.598965   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:19.624335   54101 cri.go:89] found id: ""
	I1212 00:24:19.624349   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.624357   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:19.624364   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:19.624451   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:19.650800   54101 cri.go:89] found id: ""
	I1212 00:24:19.650813   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.650820   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:19.650826   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:19.650887   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:19.676025   54101 cri.go:89] found id: ""
	I1212 00:24:19.676038   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.676046   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:19.676051   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:19.676111   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:19.702971   54101 cri.go:89] found id: ""
	I1212 00:24:19.702984   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.703003   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:19.703008   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:19.703066   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:19.727517   54101 cri.go:89] found id: ""
	I1212 00:24:19.727530   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.727537   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:19.727545   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:19.727558   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:19.784930   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:19.784948   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:19.799325   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:19.799340   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:19.872030   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:19.864278   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.865037   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.866546   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.866841   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.868283   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:19.872041   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:19.872052   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:19.934549   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:19.934568   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:22.466009   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:22.476227   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:22.476288   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:22.501678   54101 cri.go:89] found id: ""
	I1212 00:24:22.501705   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.501712   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:22.501717   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:22.501785   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:22.531238   54101 cri.go:89] found id: ""
	I1212 00:24:22.531251   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.531258   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:22.531263   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:22.531321   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:22.554936   54101 cri.go:89] found id: ""
	I1212 00:24:22.554949   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.554956   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:22.554962   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:22.555055   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:22.582980   54101 cri.go:89] found id: ""
	I1212 00:24:22.583017   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.583025   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:22.583030   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:22.583094   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:22.608038   54101 cri.go:89] found id: ""
	I1212 00:24:22.608051   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.608069   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:22.608074   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:22.608134   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:22.631929   54101 cri.go:89] found id: ""
	I1212 00:24:22.631942   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.631959   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:22.631965   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:22.632035   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:22.660069   54101 cri.go:89] found id: ""
	I1212 00:24:22.660083   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.660090   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:22.660107   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:22.660118   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:22.722675   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:22.714219   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.714970   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.716604   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.716888   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.718358   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:22.722685   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:22.722695   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:22.783718   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:22.783736   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:22.815064   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:22.815082   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:22.876099   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:22.876117   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:25.389270   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:25.399208   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:25.399264   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:25.423023   54101 cri.go:89] found id: ""
	I1212 00:24:25.423036   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.423043   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:25.423048   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:25.423110   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:25.447118   54101 cri.go:89] found id: ""
	I1212 00:24:25.447132   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.447140   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:25.447145   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:25.447203   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:25.471506   54101 cri.go:89] found id: ""
	I1212 00:24:25.471520   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.471527   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:25.471532   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:25.471588   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:25.496289   54101 cri.go:89] found id: ""
	I1212 00:24:25.496302   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.496310   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:25.496315   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:25.496371   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:25.521055   54101 cri.go:89] found id: ""
	I1212 00:24:25.521068   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.521075   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:25.521080   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:25.521136   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:25.545427   54101 cri.go:89] found id: ""
	I1212 00:24:25.545441   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.545448   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:25.545453   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:25.545509   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:25.573059   54101 cri.go:89] found id: ""
	I1212 00:24:25.573073   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.573080   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:25.573088   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:25.573098   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:25.627642   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:25.627661   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:25.638176   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:25.638192   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:25.702262   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:25.692958   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.693521   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.695662   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.696870   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.697262   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:25.702271   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:25.702283   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:25.768032   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:25.768050   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:28.306236   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:28.316297   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:28.316366   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:28.339825   54101 cri.go:89] found id: ""
	I1212 00:24:28.339838   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.339855   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:28.339860   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:28.339930   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:28.364813   54101 cri.go:89] found id: ""
	I1212 00:24:28.364826   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.364832   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:28.364837   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:28.364902   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:28.398903   54101 cri.go:89] found id: ""
	I1212 00:24:28.398917   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.398923   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:28.398928   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:28.398985   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:28.424563   54101 cri.go:89] found id: ""
	I1212 00:24:28.424577   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.424584   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:28.424595   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:28.424652   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:28.448511   54101 cri.go:89] found id: ""
	I1212 00:24:28.448524   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.448531   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:28.448536   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:28.448595   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:28.473282   54101 cri.go:89] found id: ""
	I1212 00:24:28.473295   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.473303   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:28.473308   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:28.473364   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:28.496850   54101 cri.go:89] found id: ""
	I1212 00:24:28.496864   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.496871   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:28.496879   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:28.496889   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:28.563054   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:28.554678   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.555432   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.557227   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.557770   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.559159   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:28.563064   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:28.563076   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:28.625015   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:28.625034   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:28.656873   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:28.656887   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:28.714792   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:28.714811   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
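On each iteration (spaced a few seconds apart, per the timestamps) minikube also snapshots the node's logs from the same four sources before probing again. Condensed from the Run lines above:

    # log sources gathered per iteration
    sudo journalctl -u kubelet -n 400       # kubelet unit log
    sudo journalctl -u containerd -n 400    # container runtime unit log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a

The 'which crictl || echo crictl' substitution keeps the command well-formed even when crictl is missing from PATH, so the 'sudo docker ps -a' fallback can still run.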
	I1212 00:24:31.225710   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:31.235567   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:31.235633   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:31.259473   54101 cri.go:89] found id: ""
	I1212 00:24:31.259487   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.259494   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:31.259499   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:31.259556   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:31.284058   54101 cri.go:89] found id: ""
	I1212 00:24:31.284070   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.284077   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:31.284082   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:31.284138   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:31.306894   54101 cri.go:89] found id: ""
	I1212 00:24:31.306907   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.306914   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:31.306918   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:31.306978   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:31.334534   54101 cri.go:89] found id: ""
	I1212 00:24:31.334547   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.334554   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:31.334559   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:31.334615   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:31.359236   54101 cri.go:89] found id: ""
	I1212 00:24:31.359250   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.359258   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:31.359263   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:31.359321   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:31.383234   54101 cri.go:89] found id: ""
	I1212 00:24:31.383247   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.383254   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:31.383259   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:31.383314   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:31.407612   54101 cri.go:89] found id: ""
	I1212 00:24:31.407624   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.407631   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:31.407638   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:31.407650   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:31.470123   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:31.470142   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:31.497215   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:31.497231   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:31.553428   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:31.553445   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:31.564292   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:31.564307   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:31.630782   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:31.622216   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.622767   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.624650   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.625108   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.626798   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:34.131141   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:34.141238   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:34.141296   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:34.166032   54101 cri.go:89] found id: ""
	I1212 00:24:34.166045   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.166053   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:34.166057   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:34.166117   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:34.192065   54101 cri.go:89] found id: ""
	I1212 00:24:34.192079   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.192086   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:34.192091   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:34.192146   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:34.216626   54101 cri.go:89] found id: ""
	I1212 00:24:34.216640   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.216646   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:34.216652   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:34.216710   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:34.244975   54101 cri.go:89] found id: ""
	I1212 00:24:34.244989   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.244997   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:34.245002   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:34.245058   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:34.269781   54101 cri.go:89] found id: ""
	I1212 00:24:34.269795   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.269802   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:34.269807   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:34.269867   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:34.294651   54101 cri.go:89] found id: ""
	I1212 00:24:34.294664   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.294672   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:34.294677   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:34.294740   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:34.319772   54101 cri.go:89] found id: ""
	I1212 00:24:34.319786   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.319793   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:34.319801   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:34.319811   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:34.385955   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:34.377894   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.378715   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.380217   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.380694   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.382158   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:34.385966   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:34.385976   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:34.451474   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:34.451493   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:34.478755   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:34.478770   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:34.538195   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:34.538217   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:37.049062   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:37.060494   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:37.060558   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:37.096756   54101 cri.go:89] found id: ""
	I1212 00:24:37.096769   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.096776   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:37.096781   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:37.096857   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:37.123426   54101 cri.go:89] found id: ""
	I1212 00:24:37.123441   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.123448   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:37.123453   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:37.123515   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:37.150366   54101 cri.go:89] found id: ""
	I1212 00:24:37.150379   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.150387   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:37.150392   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:37.150455   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:37.176266   54101 cri.go:89] found id: ""
	I1212 00:24:37.176281   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.176288   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:37.176293   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:37.176379   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:37.211184   54101 cri.go:89] found id: ""
	I1212 00:24:37.211198   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.211205   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:37.211210   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:37.211278   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:37.235978   54101 cri.go:89] found id: ""
	I1212 00:24:37.235992   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.235999   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:37.236005   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:37.236064   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:37.261068   54101 cri.go:89] found id: ""
	I1212 00:24:37.261082   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.261089   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:37.261097   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:37.261107   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:37.318643   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:37.318661   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:37.329758   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:37.329780   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:37.396581   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:37.388347   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.388766   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.390448   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.390869   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.392485   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:37.396591   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:37.396602   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:37.463371   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:37.463399   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
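
The cycle above is minikube's control-plane health probe: it first looks for a running kube-apiserver process with pgrep, then asks the container runtime for each expected control-plane container by name, and every listing comes back empty. The same checks can be reproduced by hand; a minimal bash sketch, assuming the docker driver (where the node container is named after the minikube profile; PROFILE below is a placeholder, not a name taken from this log):

    PROFILE=my-profile   # placeholder: substitute the profile under test
    # is any apiserver process running inside the node at all?
    docker exec "$PROFILE" sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process'
    # ask the CRI for each control-plane container, including exited ones (-a)
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
        printf '%-24s %s container(s)\n' "$c" "$(docker exec "$PROFILE" sudo crictl ps -a --quiet --name="$c" | wc -l)"
    done

Because crictl ps -a includes exited containers, an all-zero result like the one in this log suggests the control-plane pods were never created at all, not that they started and then crashed.
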
	I1212 00:24:39.999532   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:40.021164   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:40.021239   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:40.055893   54101 cri.go:89] found id: ""
	I1212 00:24:40.055908   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.055916   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:40.055921   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:40.055984   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:40.085805   54101 cri.go:89] found id: ""
	I1212 00:24:40.085821   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.085831   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:40.085837   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:40.085902   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:40.113784   54101 cri.go:89] found id: ""
	I1212 00:24:40.113797   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.113804   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:40.113809   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:40.113867   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:40.141930   54101 cri.go:89] found id: ""
	I1212 00:24:40.141945   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.141954   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:40.141959   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:40.142018   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:40.168489   54101 cri.go:89] found id: ""
	I1212 00:24:40.168503   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.168510   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:40.168515   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:40.168575   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:40.195479   54101 cri.go:89] found id: ""
	I1212 00:24:40.195494   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.195501   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:40.195506   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:40.195572   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:40.225277   54101 cri.go:89] found id: ""
	I1212 00:24:40.225290   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.225297   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:40.225305   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:40.225315   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:40.288821   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:40.280605   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.281157   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.282725   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.283252   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.284776   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:40.288833   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:40.288842   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:40.351250   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:40.351269   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:40.379379   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:40.379395   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:40.435768   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:40.435785   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
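
Every "describe nodes" attempt in these cycles fails the same way: kubectl on the node cannot reach https://localhost:8441, and "connection refused" on [::1]:8441 means nothing is listening on the apiserver port at all (a certificate or authorization problem would fail later, after the TCP connect succeeds). That matches the empty container listings: there is no apiserver to answer. A quick confirmation from inside the node; a sketch, assuming ss and curl are present in the node image:

    # is anything bound to the apiserver port?
    sudo ss -ltn | grep -w 8441 || echo 'port 8441: nothing listening'
    # a raw probe reproduces the refusal kubectl reports (curl exit code 7 = connection refused)
    curl -sk --max-time 5 https://localhost:8441/healthz; echo "curl exit=$?"
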
	I1212 00:24:42.948581   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:42.958923   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:42.958983   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:42.983729   54101 cri.go:89] found id: ""
	I1212 00:24:42.983743   54101 logs.go:282] 0 containers: []
	W1212 00:24:42.983757   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:42.983762   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:42.983823   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:43.015682   54101 cri.go:89] found id: ""
	I1212 00:24:43.015696   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.015703   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:43.015708   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:43.015767   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:43.051631   54101 cri.go:89] found id: ""
	I1212 00:24:43.051644   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.051658   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:43.051662   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:43.051723   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:43.088521   54101 cri.go:89] found id: ""
	I1212 00:24:43.088535   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.088542   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:43.088547   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:43.088606   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:43.120828   54101 cri.go:89] found id: ""
	I1212 00:24:43.120842   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.120848   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:43.120854   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:43.120916   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:43.146768   54101 cri.go:89] found id: ""
	I1212 00:24:43.146782   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.146789   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:43.146794   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:43.146877   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:43.172067   54101 cri.go:89] found id: ""
	I1212 00:24:43.172081   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.172089   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:43.172097   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:43.172107   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:43.183115   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:43.183131   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:43.245564   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:43.237027   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.237641   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.239314   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.239878   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.241570   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:43.245574   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:43.245585   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:43.307071   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:43.307092   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:43.334124   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:43.334141   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:45.892688   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:45.902643   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:45.902701   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:45.927418   54101 cri.go:89] found id: ""
	I1212 00:24:45.927432   54101 logs.go:282] 0 containers: []
	W1212 00:24:45.927439   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:45.927444   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:45.927504   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:45.950969   54101 cri.go:89] found id: ""
	I1212 00:24:45.950982   54101 logs.go:282] 0 containers: []
	W1212 00:24:45.951005   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:45.951011   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:45.951068   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:45.977037   54101 cri.go:89] found id: ""
	I1212 00:24:45.977050   54101 logs.go:282] 0 containers: []
	W1212 00:24:45.977057   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:45.977062   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:45.977127   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:46.003570   54101 cri.go:89] found id: ""
	I1212 00:24:46.003587   54101 logs.go:282] 0 containers: []
	W1212 00:24:46.003594   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:46.003600   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:46.003668   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:46.035920   54101 cri.go:89] found id: ""
	I1212 00:24:46.035934   54101 logs.go:282] 0 containers: []
	W1212 00:24:46.035941   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:46.035946   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:46.036003   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:46.073828   54101 cri.go:89] found id: ""
	I1212 00:24:46.073842   54101 logs.go:282] 0 containers: []
	W1212 00:24:46.073849   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:46.073854   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:46.073911   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:46.106173   54101 cri.go:89] found id: ""
	I1212 00:24:46.106194   54101 logs.go:282] 0 containers: []
	W1212 00:24:46.106218   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:46.106226   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:46.106239   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:46.162624   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:46.162643   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:46.173580   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:46.173602   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:46.238544   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:46.230296   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.230879   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.232549   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.233036   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.234601   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:46.238555   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:46.238566   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:46.301177   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:46.301195   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
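
With no containers to inspect, everything minikube can still collect is host-side: the kubelet and containerd journals, a filtered dmesg, and a raw container listing. The same logs can be pulled directly from a failed profile; a sketch using the minikube CLI (PROFILE as in the earlier sketch):

    # last 400 lines of the kubelet journal, where static-pod start failures surface
    minikube -p "$PROFILE" ssh -- sudo journalctl -u kubelet -n 400 --no-pager
    # containerd's journal, for image-pull and runtime errors
    minikube -p "$PROFILE" ssh -- sudo journalctl -u containerd -n 400 --no-pager

For a failure pattern like this one, the kubelet journal is the first place to look: kubelet is the component that should be creating the control-plane pods that never appear here.
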
	I1212 00:24:48.831063   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:48.843168   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:48.843226   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:48.871581   54101 cri.go:89] found id: ""
	I1212 00:24:48.871598   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.871605   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:48.871610   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:48.871669   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:48.896221   54101 cri.go:89] found id: ""
	I1212 00:24:48.896236   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.896244   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:48.896249   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:48.896307   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:48.920455   54101 cri.go:89] found id: ""
	I1212 00:24:48.920475   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.920483   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:48.920488   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:48.920550   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:48.944730   54101 cri.go:89] found id: ""
	I1212 00:24:48.944743   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.944750   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:48.944755   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:48.944815   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:48.969159   54101 cri.go:89] found id: ""
	I1212 00:24:48.969172   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.969179   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:48.969184   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:48.969238   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:49.001344   54101 cri.go:89] found id: ""
	I1212 00:24:49.001360   54101 logs.go:282] 0 containers: []
	W1212 00:24:49.001368   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:49.001373   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:49.001440   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:49.026664   54101 cri.go:89] found id: ""
	I1212 00:24:49.026688   54101 logs.go:282] 0 containers: []
	W1212 00:24:49.026696   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:49.026704   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:49.026715   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:49.088266   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:49.088284   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:49.099424   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:49.099438   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:49.166422   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:49.157832   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.158583   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.160190   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.160890   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.162624   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:49.166432   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:49.166445   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:49.227337   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:49.227355   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:51.758903   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:51.768725   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:51.768786   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:51.792403   54101 cri.go:89] found id: ""
	I1212 00:24:51.792417   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.792424   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:51.792429   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:51.792497   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:51.819996   54101 cri.go:89] found id: ""
	I1212 00:24:51.820010   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.820016   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:51.820021   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:51.820080   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:51.844706   54101 cri.go:89] found id: ""
	I1212 00:24:51.844719   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.844727   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:51.844732   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:51.844800   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:51.870289   54101 cri.go:89] found id: ""
	I1212 00:24:51.870303   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.870316   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:51.870321   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:51.870378   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:51.894116   54101 cri.go:89] found id: ""
	I1212 00:24:51.894129   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.894137   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:51.894142   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:51.894200   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:51.918453   54101 cri.go:89] found id: ""
	I1212 00:24:51.918467   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.918474   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:51.918480   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:51.918538   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:51.942207   54101 cri.go:89] found id: ""
	I1212 00:24:51.942220   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.942228   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:51.942235   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:51.942245   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:51.970818   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:51.970835   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:52.026675   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:52.026692   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:52.044175   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:52.044191   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:52.123266   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:52.114940   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:52.115962   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:52.117604   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:52.118040   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:52.119539   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:52.123275   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:52.123286   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
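
The "listing CRI containers in root /run/containerd/runc/k8s.io" lines name containerd's runc state directory for its k8s.io namespace, i.e. where per-container runtime state would live if any Kubernetes containers existed. The crictl calls in the log rely on the node's crictl configuration (normally /etc/crictl.yaml) pointing at containerd; the explicit equivalent is a sketch like:

    # query containerd's CRI endpoint directly, independent of /etc/crictl.yaml
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
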
	I1212 00:24:54.689949   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:54.700000   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:54.700070   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:54.725625   54101 cri.go:89] found id: ""
	I1212 00:24:54.725638   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.725645   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:54.725650   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:54.725716   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:54.748579   54101 cri.go:89] found id: ""
	I1212 00:24:54.748592   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.748600   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:54.748604   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:54.748661   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:54.772796   54101 cri.go:89] found id: ""
	I1212 00:24:54.772809   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.772816   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:54.772821   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:54.772876   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:54.797082   54101 cri.go:89] found id: ""
	I1212 00:24:54.797095   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.797102   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:54.797107   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:54.797168   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:54.821359   54101 cri.go:89] found id: ""
	I1212 00:24:54.821372   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.821379   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:54.821384   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:54.821441   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:54.848911   54101 cri.go:89] found id: ""
	I1212 00:24:54.848924   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.848931   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:54.848936   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:54.848993   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:54.872383   54101 cri.go:89] found id: ""
	I1212 00:24:54.872397   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.872404   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:54.872412   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:54.872422   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:54.927404   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:54.927423   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:54.938083   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:54.938099   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:55.013009   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:54.998953   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:55.000234   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:55.001265   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:55.004572   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:55.007712   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:55.013021   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:55.013032   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:55.084355   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:55.084375   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:57.624991   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:57.635207   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:57.635270   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:57.662282   54101 cri.go:89] found id: ""
	I1212 00:24:57.662296   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.662304   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:57.662309   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:57.662365   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:57.692048   54101 cri.go:89] found id: ""
	I1212 00:24:57.692061   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.692068   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:57.692073   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:57.692128   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:57.717665   54101 cri.go:89] found id: ""
	I1212 00:24:57.717679   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.717686   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:57.717692   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:57.717752   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:57.746206   54101 cri.go:89] found id: ""
	I1212 00:24:57.746219   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.746226   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:57.746233   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:57.746291   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:57.772883   54101 cri.go:89] found id: ""
	I1212 00:24:57.772896   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.772904   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:57.772909   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:57.772969   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:57.796550   54101 cri.go:89] found id: ""
	I1212 00:24:57.796564   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.796571   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:57.796576   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:57.796636   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:57.819457   54101 cri.go:89] found id: ""
	I1212 00:24:57.819470   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.819481   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:57.819489   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:57.819499   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:57.848789   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:57.848804   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:57.903379   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:57.903404   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:57.914134   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:57.914150   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:57.981734   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:57.973800   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:57.974813   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:57.975633   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:57.976681   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:57.977401   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:57.981743   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:57.981764   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
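
The dmesg filter used in every cycle, dmesg -PH -L=never --level warn,err,crit,alert,emerg, keeps only kernel messages at warning severity and above, with human-readable timestamps (-H), no pager (-P), and color disabled (-L=never); the trailing tail -n 400 bounds the output. Run by hand it is a cheap way to rule kernel-level causes in or out; a sketch:

    # kernel warnings and worse, last 400 lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # narrower check for memory pressure, one plausible cause of a control plane that never starts
    sudo dmesg | grep -iE 'out of memory|oom-kill' || echo 'no OOM events logged'
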
	I1212 00:25:00.548466   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:00.559868   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:00.559941   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:00.588355   54101 cri.go:89] found id: ""
	I1212 00:25:00.588369   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.588377   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:00.588383   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:00.588446   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:00.615059   54101 cri.go:89] found id: ""
	I1212 00:25:00.615073   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.615080   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:00.615085   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:00.615144   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:00.642285   54101 cri.go:89] found id: ""
	I1212 00:25:00.642299   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.642307   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:00.642312   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:00.642370   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:00.670680   54101 cri.go:89] found id: ""
	I1212 00:25:00.670693   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.670701   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:00.670706   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:00.670766   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:00.696244   54101 cri.go:89] found id: ""
	I1212 00:25:00.696258   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.696266   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:00.696271   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:00.696386   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:00.725727   54101 cri.go:89] found id: ""
	I1212 00:25:00.725741   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.725758   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:00.725764   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:00.725844   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:00.754004   54101 cri.go:89] found id: ""
	I1212 00:25:00.754018   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.754025   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:00.754032   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:00.754044   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:00.766092   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:00.766108   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:00.830876   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:00.822487   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.823145   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.824701   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.825291   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.826797   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:00.822487   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.823145   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.824701   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.825291   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.826797   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:00.830886   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:00.830899   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:00.893247   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:00.893265   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:00.920729   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:00.920744   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
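
The cycle above is the control-plane health probe: for each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) minikube runs "sudo crictl ps -a --quiet --name=<component>" over SSH, and an empty ID list is logged as "0 containers: []". A minimal sketch of that probe, where the listContainers helper is a hypothetical stand-in (the command and flags are copied from the log; this is not minikube's actual API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers approximates the probe shown in the log:
// `sudo crictl ps -a --quiet --name=<component>` prints one container ID
// per line, or nothing at all when no container matches.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // drops the trailing newline
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("probe failed for %q: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // matches the "0 containers: []" lines
	}
}
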
	I1212 00:25:03.481388   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:03.491775   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:03.491838   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:03.521216   54101 cri.go:89] found id: ""
	I1212 00:25:03.521230   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.521238   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:03.521243   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:03.521304   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:03.549226   54101 cri.go:89] found id: ""
	I1212 00:25:03.549240   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.549247   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:03.549258   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:03.549315   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:03.577069   54101 cri.go:89] found id: ""
	I1212 00:25:03.577083   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.577090   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:03.577097   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:03.577156   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:03.606566   54101 cri.go:89] found id: ""
	I1212 00:25:03.606580   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.606587   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:03.606592   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:03.606652   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:03.631034   54101 cri.go:89] found id: ""
	I1212 00:25:03.631049   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.631057   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:03.631062   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:03.631125   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:03.655850   54101 cri.go:89] found id: ""
	I1212 00:25:03.655864   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.655871   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:03.655876   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:03.655951   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:03.682159   54101 cri.go:89] found id: ""
	I1212 00:25:03.682173   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.682180   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:03.682187   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:03.682200   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:03.692956   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:03.692973   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:03.759732   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:03.751026   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.751692   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.753437   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.754061   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.755694   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:03.751026   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.751692   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.753437   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.754061   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.755694   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:03.759743   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:03.759754   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:03.821448   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:03.821467   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:03.854174   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:03.854191   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
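
Every "describe nodes" attempt above fails the same way: with no kube-apiserver container running, nothing listens on the apiserver port (8441 here, per this test's --apiserver-port flag), so kubectl's dial of the IPv6 loopback [::1]:8441 is refused before any HTTP exchange happens. The transport-level failure alone can be reproduced with a plain TCP dial; a sketch using the address from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// kubectl's "dial tcp [::1]:8441: connect: connection refused" is the
	// bare TCP handshake failing; this reproduces only that layer.
	conn, err := net.DialTimeout("tcp", "[::1]:8441", 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
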
	I1212 00:25:06.412785   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:06.423128   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:06.423192   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:06.451062   54101 cri.go:89] found id: ""
	I1212 00:25:06.451075   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.451082   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:06.451087   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:06.451145   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:06.476861   54101 cri.go:89] found id: ""
	I1212 00:25:06.476875   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.476882   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:06.476888   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:06.476956   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:06.502250   54101 cri.go:89] found id: ""
	I1212 00:25:06.502277   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.502284   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:06.502295   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:06.502363   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:06.527789   54101 cri.go:89] found id: ""
	I1212 00:25:06.527803   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.527810   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:06.527816   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:06.527876   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:06.552928   54101 cri.go:89] found id: ""
	I1212 00:25:06.552942   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.552950   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:06.552956   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:06.553015   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:06.580455   54101 cri.go:89] found id: ""
	I1212 00:25:06.580468   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.580475   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:06.580481   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:06.580541   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:06.605618   54101 cri.go:89] found id: ""
	I1212 00:25:06.605632   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.605640   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:06.605656   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:06.605667   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:06.661856   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:06.661873   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:06.673040   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:06.673057   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:06.744531   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:06.737026   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.737431   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.738919   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.739260   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.740703   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:06.737026   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.737431   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.738919   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.739260   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.740703   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:06.744541   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:06.744552   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:06.810963   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:06.810982   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
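
With no component containers to inspect, each cycle falls back to host-level log gathering: the last 400 lines of the kubelet and containerd journals, a severity-filtered dmesg, and a container listing with a shell fallback so the command still returns something when crictl is absent from PATH. A sketch that runs the same gather commands locally (the command strings are copied verbatim from the log; running them in a fixed order is a simplification):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The exact gather commands from the log; each runs under `bash -c`,
	// just as ssh_runner executes them on the node.
	cmds := []string{
		`sudo journalctl -u kubelet -n 400`,
		`sudo journalctl -u containerd -n 400`,
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("$ %s\nerr=%v\n%s\n", c, err, out)
	}
}
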
	I1212 00:25:09.340882   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:09.351148   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:09.351207   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:09.376060   54101 cri.go:89] found id: ""
	I1212 00:25:09.376074   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.376081   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:09.376086   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:09.376144   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:09.401509   54101 cri.go:89] found id: ""
	I1212 00:25:09.401524   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.401532   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:09.401537   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:09.401594   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:09.430682   54101 cri.go:89] found id: ""
	I1212 00:25:09.430697   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.430704   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:09.430709   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:09.430779   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:09.455570   54101 cri.go:89] found id: ""
	I1212 00:25:09.455583   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.455590   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:09.455596   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:09.455652   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:09.480221   54101 cri.go:89] found id: ""
	I1212 00:25:09.480234   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.480251   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:09.480257   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:09.480312   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:09.504553   54101 cri.go:89] found id: ""
	I1212 00:25:09.504566   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.504573   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:09.504578   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:09.504634   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:09.529091   54101 cri.go:89] found id: ""
	I1212 00:25:09.529105   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.529111   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:09.529119   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:09.529129   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:09.590147   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:09.590169   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:09.616705   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:09.616720   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:09.674296   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:09.674314   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:09.685008   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:09.685023   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:09.747995   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:09.740039   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.740945   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.742442   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.742752   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.744216   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:09.740039   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.740945   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.742442   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.742752   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.744216   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
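
The doubled stderr in each failure block (once under "stderr:", once between the ** stderr ** markers) is plausibly the runner formatting the captured stderr twice, embedded in the command error and again as the raw capture. That reading is an assumption about the formatting, not confirmed from minikube's source; a sketch that reproduces the shape:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// A command that fails and writes to stderr, like the kubectl calls above.
	cmd := exec.Command("/bin/bash", "-c",
		"echo 'The connection to the server localhost:8441 was refused' >&2; exit 1")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	err := cmd.Run()
	// The capture is printed twice: inline after "stderr:" and again inside
	// the ** stderr ** markers, matching the doubled blocks in this report.
	fmt.Printf("failed: %v\nstdout:\n\nstderr:\n%s", err, stderr.String())
	fmt.Printf(" output: \n** stderr ** \n%s\n** /stderr **\n", stderr.String())
}
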
	I1212 00:25:12.248240   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:12.258577   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:12.258636   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:12.296410   54101 cri.go:89] found id: ""
	I1212 00:25:12.296425   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.296432   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:12.296438   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:12.296495   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:12.322054   54101 cri.go:89] found id: ""
	I1212 00:25:12.322069   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.322076   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:12.322081   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:12.322137   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:12.354557   54101 cri.go:89] found id: ""
	I1212 00:25:12.354570   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.354577   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:12.354582   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:12.354643   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:12.379214   54101 cri.go:89] found id: ""
	I1212 00:25:12.379228   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.379235   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:12.379240   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:12.379297   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:12.403239   54101 cri.go:89] found id: ""
	I1212 00:25:12.403253   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.403261   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:12.403266   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:12.403325   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:12.429024   54101 cri.go:89] found id: ""
	I1212 00:25:12.429039   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.429052   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:12.429058   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:12.429117   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:12.454240   54101 cri.go:89] found id: ""
	I1212 00:25:12.454253   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.454260   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:12.454268   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:12.454279   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:12.465168   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:12.465185   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:12.530196   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:12.522373   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.522762   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.524330   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.524677   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.526171   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:12.522373   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.522762   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.524330   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.524677   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.526171   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:12.530207   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:12.530218   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:12.596659   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:12.596686   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:12.629646   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:12.629666   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
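
The timestamps show the whole probe-and-gather cycle repeating roughly every three seconds (00:25:00, :03, :06, ...), the shape of a poll-until-deadline loop; --wait=all keeps the start command polling until the apiserver answers or the wait times out. A sketch of that control flow, where the 3 s interval and the apiserverUp probe are assumptions inferred from the log rather than minikube's actual constants:

package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// apiserverUp is an assumed probe: the real check also inspects processes
// and containers, but a TCP dial captures the failure seen in this report.
func apiserverUp() bool {
	conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiserverUp() {
			return nil
		}
		// In the report, this is where each "Gathering logs for ..." block runs.
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for kube-apiserver")
}

func main() {
	if err := waitForAPIServer(3*time.Second, 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
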
	I1212 00:25:15.188117   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:15.198184   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:15.198246   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:15.222760   54101 cri.go:89] found id: ""
	I1212 00:25:15.222774   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.222781   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:15.222786   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:15.222841   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:15.247134   54101 cri.go:89] found id: ""
	I1212 00:25:15.247149   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.247156   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:15.247161   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:15.247220   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:15.273493   54101 cri.go:89] found id: ""
	I1212 00:25:15.273506   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.273513   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:15.273518   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:15.273575   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:15.325769   54101 cri.go:89] found id: ""
	I1212 00:25:15.325782   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.325790   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:15.325794   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:15.325851   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:15.352564   54101 cri.go:89] found id: ""
	I1212 00:25:15.352578   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.352589   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:15.352594   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:15.352652   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:15.381006   54101 cri.go:89] found id: ""
	I1212 00:25:15.381025   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.381032   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:15.381037   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:15.381094   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:15.404889   54101 cri.go:89] found id: ""
	I1212 00:25:15.404903   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.404910   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:15.404917   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:15.404936   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:15.472619   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:15.464098   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.465350   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.466018   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.467674   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.468107   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:15.464098   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.465350   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.466018   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.467674   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.468107   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:15.472631   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:15.472643   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:15.533279   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:15.533297   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:15.563170   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:15.563185   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:15.622483   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:15.622499   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
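
Note that the order of the "Gathering logs for ..." steps shuffles from cycle to cycle (dmesg first in one pass, kubelet or container status first in another). That is consistent with the log gatherers being held in a Go map, whose iteration order is randomized on every traversal; this is speculation about minikube's internals, but the effect itself is easy to demonstrate:

package main

import "fmt"

func main() {
	// Ranging over a Go map yields keys in an unspecified order that is
	// re-randomized per traversal, which would shuffle the gather steps
	// between cycles exactly as this report shows.
	gatherers := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg ... | tail -n 400",
		"describe nodes":   "kubectl describe nodes --kubeconfig=...",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"container status": "crictl ps -a || docker ps -a",
	}
	for i := 0; i < 3; i++ {
		for name := range gatherers {
			fmt.Printf("Gathering logs for %s ...\n", name)
		}
		fmt.Println("---")
	}
}
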
	I1212 00:25:18.135301   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:18.145599   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:18.145657   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:18.170223   54101 cri.go:89] found id: ""
	I1212 00:25:18.170237   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.170245   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:18.170250   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:18.170317   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:18.194981   54101 cri.go:89] found id: ""
	I1212 00:25:18.195034   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.195042   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:18.195047   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:18.195107   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:18.219741   54101 cri.go:89] found id: ""
	I1212 00:25:18.219754   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.219762   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:18.219767   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:18.219836   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:18.244023   54101 cri.go:89] found id: ""
	I1212 00:25:18.244036   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.244043   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:18.244048   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:18.244105   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:18.268830   54101 cri.go:89] found id: ""
	I1212 00:25:18.268844   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.268852   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:18.268857   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:18.268920   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:18.308533   54101 cri.go:89] found id: ""
	I1212 00:25:18.308547   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.308553   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:18.308558   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:18.308618   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:18.342407   54101 cri.go:89] found id: ""
	I1212 00:25:18.342420   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.342426   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:18.342434   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:18.342444   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:18.411629   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:18.403777   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.404392   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.405943   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.406371   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.407842   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:18.403777   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.404392   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.405943   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.406371   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.407842   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:18.411640   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:18.411652   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:18.476356   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:18.476375   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:18.508597   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:18.508613   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:18.565071   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:18.565088   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
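
Each cycle opens with "sudo pgrep -xnf kube-apiserver.*minikube.*": -f matches the pattern against the full command line, -x requires the whole line to match, and -n returns only the newest matching PID. pgrep exits non-zero when nothing matches, which is the quiet ~10 ms step before each crictl sweep here. A sketch of the same check:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// -f: match the full command line; -x: the whole line must match the
	// pattern; -n: newest matching process only. pgrep exits 1 when no
	// process matches, which is the case throughout this report.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	pid := strings.TrimSpace(string(out))
	if err != nil || pid == "" {
		fmt.Println("no kube-apiserver process found")
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}
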
	I1212 00:25:21.075765   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:21.087124   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:21.087190   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:21.116451   54101 cri.go:89] found id: ""
	I1212 00:25:21.116465   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.116472   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:21.116477   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:21.116540   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:21.142594   54101 cri.go:89] found id: ""
	I1212 00:25:21.142607   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.142615   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:21.142620   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:21.142678   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:21.167624   54101 cri.go:89] found id: ""
	I1212 00:25:21.167638   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.167646   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:21.167651   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:21.167709   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:21.195907   54101 cri.go:89] found id: ""
	I1212 00:25:21.195921   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.195927   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:21.195932   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:21.195987   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:21.220794   54101 cri.go:89] found id: ""
	I1212 00:25:21.220808   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.220816   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:21.220821   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:21.220880   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:21.246438   54101 cri.go:89] found id: ""
	I1212 00:25:21.246451   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.246462   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:21.246473   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:21.246531   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:21.271784   54101 cri.go:89] found id: ""
	I1212 00:25:21.271799   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.271806   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:21.271814   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:21.271833   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:21.315787   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:21.315812   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:21.377319   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:21.377338   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:21.388870   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:21.388885   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:21.453883   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:21.444432   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.445344   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.446969   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.447534   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.449241   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:21.444432   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.445344   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.446969   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.447534   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.449241   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:21.453893   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:21.453904   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
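
The "describe nodes" gather uses the version-pinned kubectl staged inside the node (/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl) together with the node-local kubeconfig, so it exercises the same client/server pairing the cluster itself would use rather than whatever kubectl happens to be on the host. The same invocation as a sketch (binary and kubeconfig paths copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same binary and kubeconfig paths the report shows; run on the node.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With no apiserver listening, this fails with "The connection to
		// the server localhost:8441 was refused", as throughout this report.
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}
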
	I1212 00:25:24.019730   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:24.030732   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:24.030792   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:24.057384   54101 cri.go:89] found id: ""
	I1212 00:25:24.057397   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.057404   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:24.057410   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:24.057467   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:24.087868   54101 cri.go:89] found id: ""
	I1212 00:25:24.087883   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.087891   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:24.087896   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:24.087960   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:24.112813   54101 cri.go:89] found id: ""
	I1212 00:25:24.112827   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.112835   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:24.112840   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:24.112900   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:24.141527   54101 cri.go:89] found id: ""
	I1212 00:25:24.141541   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.141548   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:24.141553   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:24.141612   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:24.171422   54101 cri.go:89] found id: ""
	I1212 00:25:24.171436   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.171444   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:24.171449   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:24.171506   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:24.196733   54101 cri.go:89] found id: ""
	I1212 00:25:24.196758   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.196767   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:24.196772   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:24.196840   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:24.221142   54101 cri.go:89] found id: ""
	I1212 00:25:24.221163   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.221170   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:24.221178   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:24.221188   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:24.280043   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:24.280061   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:24.294333   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:24.294347   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:24.376651   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:24.368398   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.368936   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.370665   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.371218   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.372749   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:24.368398   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.368936   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.370665   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.371218   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.372749   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
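Every `kubectl describe nodes` attempt above fails the same way: the client cannot even open a TCP connection to the apiserver on port 8441. A minimal Go sketch of that reachability check (illustrative only, not minikube code) — a "connection refused" result is consistent with the crictl output showing no kube-apiserver container at all:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Is anything listening on the apiserver port? "connect: connection
        // refused" here matches the kubectl errors quoted above.
        conn, err := net.DialTimeout("tcp", "127.0.0.1:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on 8441")
    }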
	I1212 00:25:24.376660   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:24.376670   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:24.442437   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:24.442455   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
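The block above is one full iteration of minikube's control-plane poll: pgrep for a kube-apiserver process, one `crictl ps -a --quiet --name=<component>` per control-plane component, then log gathering (kubelet, dmesg, describe nodes, containerd, container status). A sketch approximating the crictl half of that loop, assuming crictl is on PATH and sudo is available; an empty ID list is what produces the `found id: ""` lines:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
            // Same invocation as the logged `sudo crictl ps -a --quiet --name=...`:
            // list all containers (running or exited) whose name matches.
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container matching %q\n", name)
                continue
            }
            fmt.Println(name, "->", ids)
        }
    }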
	I1212 00:25:26.972180   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:26.982717   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:26.982778   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:27.016302   54101 cri.go:89] found id: ""
	I1212 00:25:27.016317   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.016324   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:27.016329   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:27.016390   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:27.041562   54101 cri.go:89] found id: ""
	I1212 00:25:27.041576   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.041583   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:27.041588   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:27.041647   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:27.067288   54101 cri.go:89] found id: ""
	I1212 00:25:27.067301   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.067308   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:27.067313   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:27.067370   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:27.093958   54101 cri.go:89] found id: ""
	I1212 00:25:27.093978   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.093985   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:27.093990   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:27.094046   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:27.119290   54101 cri.go:89] found id: ""
	I1212 00:25:27.119303   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.119310   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:27.119321   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:27.119378   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:27.147433   54101 cri.go:89] found id: ""
	I1212 00:25:27.147446   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.147452   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:27.147457   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:27.147513   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:27.172138   54101 cri.go:89] found id: ""
	I1212 00:25:27.172152   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.172159   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:27.172167   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:27.172177   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:27.228777   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:27.228797   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:27.240006   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:27.240021   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:27.317423   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:27.308478   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.309592   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.311317   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.311656   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.313135   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:27.308478   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.309592   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.311317   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.311656   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.313135   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:27.317433   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:27.317444   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:27.386770   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:27.386790   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:29.918004   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:29.928163   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:29.928225   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:29.957041   54101 cri.go:89] found id: ""
	I1212 00:25:29.957055   54101 logs.go:282] 0 containers: []
	W1212 00:25:29.957062   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:29.957067   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:29.957124   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:29.982223   54101 cri.go:89] found id: ""
	I1212 00:25:29.982237   54101 logs.go:282] 0 containers: []
	W1212 00:25:29.982244   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:29.982249   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:29.982306   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:30.021601   54101 cri.go:89] found id: ""
	I1212 00:25:30.021616   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.021625   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:30.021630   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:30.021707   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:30.065430   54101 cri.go:89] found id: ""
	I1212 00:25:30.065447   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.065456   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:30.065462   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:30.065547   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:30.094609   54101 cri.go:89] found id: ""
	I1212 00:25:30.094623   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.094630   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:30.094635   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:30.094695   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:30.122604   54101 cri.go:89] found id: ""
	I1212 00:25:30.122618   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.122626   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:30.122631   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:30.122690   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:30.148645   54101 cri.go:89] found id: ""
	I1212 00:25:30.148659   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.148667   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:30.148675   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:30.148685   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:30.206432   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:30.206452   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:30.218454   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:30.218469   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:30.284319   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:30.274262   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.275194   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.276848   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.277482   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.278689   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:30.274262   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.275194   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.276848   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.277482   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.278689   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:30.284328   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:30.284339   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:30.356346   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:30.356372   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:32.883437   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:32.893868   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:32.893927   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:32.917839   54101 cri.go:89] found id: ""
	I1212 00:25:32.917852   54101 logs.go:282] 0 containers: []
	W1212 00:25:32.917859   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:32.917865   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:32.917931   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:32.942885   54101 cri.go:89] found id: ""
	I1212 00:25:32.942899   54101 logs.go:282] 0 containers: []
	W1212 00:25:32.942906   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:32.942911   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:32.942974   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:32.968519   54101 cri.go:89] found id: ""
	I1212 00:25:32.968532   54101 logs.go:282] 0 containers: []
	W1212 00:25:32.968539   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:32.968544   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:32.968602   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:33.004343   54101 cri.go:89] found id: ""
	I1212 00:25:33.004357   54101 logs.go:282] 0 containers: []
	W1212 00:25:33.004365   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:33.004370   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:33.004440   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:33.033496   54101 cri.go:89] found id: ""
	I1212 00:25:33.033510   54101 logs.go:282] 0 containers: []
	W1212 00:25:33.033524   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:33.033530   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:33.033590   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:33.061868   54101 cri.go:89] found id: ""
	I1212 00:25:33.061890   54101 logs.go:282] 0 containers: []
	W1212 00:25:33.061898   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:33.061903   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:33.061969   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:33.088616   54101 cri.go:89] found id: ""
	I1212 00:25:33.088630   54101 logs.go:282] 0 containers: []
	W1212 00:25:33.088637   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:33.088645   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:33.088655   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:33.144882   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:33.144899   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:33.156391   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:33.156407   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:33.220404   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:33.211436   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.212144   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.214079   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.214925   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.216614   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:33.211436   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.212144   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.214079   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.214925   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.216614   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:33.220413   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:33.220424   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:33.291732   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:33.291751   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:35.829535   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:35.839412   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:35.839479   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:35.864609   54101 cri.go:89] found id: ""
	I1212 00:25:35.864629   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.864639   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:35.864644   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:35.864705   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:35.888220   54101 cri.go:89] found id: ""
	I1212 00:25:35.888234   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.888241   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:35.888245   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:35.888304   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:35.911726   54101 cri.go:89] found id: ""
	I1212 00:25:35.911739   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.911746   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:35.911751   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:35.911812   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:35.937495   54101 cri.go:89] found id: ""
	I1212 00:25:35.937510   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.937517   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:35.937522   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:35.937578   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:35.962276   54101 cri.go:89] found id: ""
	I1212 00:25:35.962290   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.962296   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:35.962301   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:35.962360   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:35.985962   54101 cri.go:89] found id: ""
	I1212 00:25:35.985981   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.985989   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:35.985994   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:35.986056   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:36.012853   54101 cri.go:89] found id: ""
	I1212 00:25:36.012867   54101 logs.go:282] 0 containers: []
	W1212 00:25:36.012875   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:36.012882   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:36.012895   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:36.069296   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:36.069315   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:36.080983   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:36.081000   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:36.149041   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:36.139864   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.140552   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.142366   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.143049   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.144891   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:36.139864   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.140552   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.142366   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.143049   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.144891   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:36.149053   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:36.149064   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:36.210509   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:36.210528   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:38.743061   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:38.752984   54101 kubeadm.go:602] duration metric: took 4m3.726857079s to restartPrimaryControlPlane
	W1212 00:25:38.753047   54101 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 00:25:38.753120   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
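After roughly four minutes of empty polls (`took 4m3.726857079s to restartPrimaryControlPlane`), minikube abandons the restart and falls back to `kubeadm reset` followed by a fresh `kubeadm init`. The shape of that wait loop, sketched from the timestamps above — the helper name and exact timeout here are assumptions, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning is a hypothetical helper standing in for the
    // `sudo pgrep -xnf kube-apiserver.*minikube.*` check minikube runs over SSH.
    func apiserverRunning() bool {
        // pgrep exits non-zero when nothing matches, so Run() returns an error.
        return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // assumed budget; the real run gave up after ~4m03s
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("control plane is back")
                return
            }
            time.Sleep(3 * time.Second) // the poll timestamps above are roughly 3s apart
        }
        fmt.Println("timed out; falling back to kubeadm reset + init")
    }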
	I1212 00:25:39.158817   54101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:25:39.172695   54101 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:25:39.181725   54101 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:25:39.181785   54101 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:25:39.189823   54101 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:25:39.189833   54101 kubeadm.go:158] found existing configuration files:
	
	I1212 00:25:39.189882   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:25:39.197507   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:25:39.197568   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:25:39.206290   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:25:39.215918   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:25:39.215979   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:25:39.224009   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:25:39.231677   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:25:39.231744   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:25:39.239027   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:25:39.246759   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:25:39.246820   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
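The four grep/rm pairs above are stale-kubeconfig cleanup: a kubeconfig survives only if it already points at https://control-plane.minikube.internal:8441, and since `kubeadm reset` removed all four files, every grep exits with status 2 and the (already absent) file is removed anyway. Schematically, assuming the logged behaviour reflects the logic:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8441"
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            // Mirrors `sudo grep <endpoint> <conf>` followed by `sudo rm -f`:
            // keep the file only if it already targets the expected endpoint.
            data, err := os.ReadFile(conf)
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(conf)
                fmt.Println("removed (or already absent):", conf)
            }
        }
    }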
	I1212 00:25:39.254322   54101 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:25:39.294892   54101 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:25:39.294976   54101 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:25:39.369123   54101 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:25:39.369186   54101 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:25:39.369220   54101 kubeadm.go:319] OS: Linux
	I1212 00:25:39.369264   54101 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:25:39.369311   54101 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:25:39.369356   54101 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:25:39.369403   54101 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:25:39.369450   54101 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:25:39.369496   54101 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:25:39.369541   54101 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:25:39.369587   54101 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:25:39.369632   54101 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:25:39.438649   54101 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:25:39.438759   54101 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:25:39.438849   54101 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:25:39.447406   54101 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:25:39.452683   54101 out.go:252]   - Generating certificates and keys ...
	I1212 00:25:39.452767   54101 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:25:39.452831   54101 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:25:39.452906   54101 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:25:39.452965   54101 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:25:39.453033   54101 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:25:39.453085   54101 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 00:25:39.453148   54101 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:25:39.453208   54101 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:25:39.453281   54101 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:25:39.453353   54101 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:25:39.453389   54101 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 00:25:39.453445   54101 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:25:39.710711   54101 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:25:40.209307   54101 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:25:40.334299   54101 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:25:40.657582   54101 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:25:40.893171   54101 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:25:40.893926   54101 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:25:40.896489   54101 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:25:40.899767   54101 out.go:252]   - Booting up control plane ...
	I1212 00:25:40.899871   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:25:40.899953   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:25:40.900236   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:25:40.921621   54101 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:25:40.921722   54101 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:25:40.928629   54101 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:25:40.928898   54101 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:25:40.928939   54101 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:25:41.061713   54101 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:25:41.061825   54101 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:29:41.062316   54101 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001026811s
	I1212 00:29:41.062606   54101 kubeadm.go:319] 
	I1212 00:29:41.062683   54101 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:29:41.062716   54101 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:29:41.062821   54101 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:29:41.062826   54101 kubeadm.go:319] 
	I1212 00:29:41.062929   54101 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:29:41.062960   54101 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:29:41.063008   54101 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:29:41.063012   54101 kubeadm.go:319] 
	I1212 00:29:41.067208   54101 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:29:41.067622   54101 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:29:41.067731   54101 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:29:41.067994   54101 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 00:29:41.067998   54101 kubeadm.go:319] 
	I1212 00:29:41.068065   54101 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
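The init failure comes down to the kubelet health gate: kubeadm polls http://127.0.0.1:10248/healthz for up to 4m0s and never gets an answer. A self-contained Go stand-in for that probe, equivalent in effect to the `curl -sSL http://127.0.0.1:10248/healthz` call quoted in the error:

    package main

    import (
        "context"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // 4m0s matches the budget kubeadm reports above.
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        for {
            req, _ := http.NewRequestWithContext(ctx, http.MethodGet,
                "http://127.0.0.1:10248/healthz", nil)
            resp, err := http.DefaultClient.Do(req)
            if err == nil {
                healthy := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if healthy {
                    fmt.Println("kubelet is healthy")
                    return
                }
            }
            select {
            case <-ctx.Done():
                fmt.Println("kubelet never became healthy:", ctx.Err())
                return
            case <-time.After(2 * time.Second):
            }
        }
    }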
	W1212 00:29:41.068164   54101 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001026811s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
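Of the three preflight warnings, the cgroups one is the most plausible culprit on this host: the verification output lists the v1 CGROUPS_* controllers, and per the warning text the kubelet in v1.35+ refuses cgroup v1 unless the KubeletConfiguration option FailCgroupV1 is set to false (the spelling of that option is taken from the warning, not independently verified here). A Linux-only sketch that checks which cgroup hierarchy the host actually mounts; it depends on golang.org/x/sys/unix:

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    func main() {
        var st unix.Statfs_t
        if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
            fmt.Println("statfs failed:", err)
            return
        }
        // On a unified (v2) host /sys/fs/cgroup is a cgroup2 filesystem;
        // anything else implies the deprecated v1 layout flagged above.
        if st.Type == unix.CGROUP2_SUPER_MAGIC {
            fmt.Println("cgroup v2 (unified hierarchy)")
        } else {
            fmt.Println("cgroup v1 - kubelet v1.35+ needs FailCgroupV1 set to false")
        }
    }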
	
	I1212 00:29:41.068252   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 00:29:41.482759   54101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:29:41.496287   54101 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:29:41.496351   54101 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:29:41.504378   54101 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:29:41.504387   54101 kubeadm.go:158] found existing configuration files:
	
	I1212 00:29:41.504442   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:29:41.512585   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:29:41.512640   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:29:41.520530   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:29:41.528262   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:29:41.528318   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:29:41.536111   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:29:41.543998   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:29:41.544056   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:29:41.551686   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:29:41.559774   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:29:41.559831   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:29:41.567115   54101 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:29:41.604105   54101 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:29:41.604156   54101 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:29:41.681810   54101 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:29:41.681880   54101 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:29:41.681919   54101 kubeadm.go:319] OS: Linux
	I1212 00:29:41.681969   54101 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:29:41.682023   54101 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:29:41.682069   54101 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:29:41.682134   54101 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:29:41.682195   54101 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:29:41.682256   54101 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:29:41.682310   54101 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:29:41.682358   54101 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:29:41.682410   54101 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:29:41.751743   54101 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:29:41.751870   54101 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:29:41.751978   54101 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:29:41.757399   54101 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:29:41.762811   54101 out.go:252]   - Generating certificates and keys ...
	I1212 00:29:41.762902   54101 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:29:41.762969   54101 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:29:41.763059   54101 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:29:41.763119   54101 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:29:41.763187   54101 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:29:41.763239   54101 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 00:29:41.763301   54101 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:29:41.763361   54101 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:29:41.763434   54101 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:29:41.763505   54101 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:29:41.763542   54101 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 00:29:41.763596   54101 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:29:42.025181   54101 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:29:42.229266   54101 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:29:42.409579   54101 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:29:42.479383   54101 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:29:43.146782   54101 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:29:43.147428   54101 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:29:43.150122   54101 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:29:43.153470   54101 out.go:252]   - Booting up control plane ...
	I1212 00:29:43.153571   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:29:43.153647   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:29:43.153712   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:29:43.174954   54101 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:29:43.175084   54101 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:29:43.182722   54101 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:29:43.183334   54101 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:29:43.183511   54101 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:29:43.327482   54101 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:29:43.327594   54101 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:33:43.326577   54101 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001134626s
	I1212 00:33:43.326601   54101 kubeadm.go:319] 
	I1212 00:33:43.326657   54101 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:33:43.326688   54101 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:33:43.326791   54101 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:33:43.326796   54101 kubeadm.go:319] 
	I1212 00:33:43.326899   54101 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:33:43.326930   54101 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:33:43.326959   54101 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:33:43.326962   54101 kubeadm.go:319] 
	I1212 00:33:43.331146   54101 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:33:43.331567   54101 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:33:43.331673   54101 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:33:43.331909   54101 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 00:33:43.331913   54101 kubeadm.go:319] 
	I1212 00:33:43.331980   54101 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 00:33:43.332070   54101 kubeadm.go:403] duration metric: took 12m8.353678295s to StartCluster
	I1212 00:33:43.332098   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:33:43.332159   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:33:43.356905   54101 cri.go:89] found id: ""
	I1212 00:33:43.356919   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.356925   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:33:43.356930   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:33:43.356985   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:33:43.381448   54101 cri.go:89] found id: ""
	I1212 00:33:43.381464   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.381471   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:33:43.381477   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:33:43.381541   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:33:43.409467   54101 cri.go:89] found id: ""
	I1212 00:33:43.409480   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.409487   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:33:43.409492   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:33:43.409550   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:33:43.434352   54101 cri.go:89] found id: ""
	I1212 00:33:43.434367   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.434375   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:33:43.434381   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:33:43.434439   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:33:43.458566   54101 cri.go:89] found id: ""
	I1212 00:33:43.458581   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.458588   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:33:43.458593   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:33:43.458661   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:33:43.482646   54101 cri.go:89] found id: ""
	I1212 00:33:43.482660   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.482667   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:33:43.482672   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:33:43.482728   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:33:43.507433   54101 cri.go:89] found id: ""
	I1212 00:33:43.507445   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.507452   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:33:43.507461   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:33:43.507472   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:33:43.575281   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:33:43.567196   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.568177   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.569762   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.570292   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.571460   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:33:43.567196   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.568177   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.569762   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.570292   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.571460   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:33:43.575296   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:33:43.575305   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:33:43.637567   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:33:43.637585   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:33:43.665505   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:33:43.665520   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:33:43.723897   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:33:43.723913   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 00:33:43.734646   54101 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001134626s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 00:33:43.734686   54101 out.go:285] * 
	W1212 00:33:43.734800   54101 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001134626s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 00:33:43.734860   54101 out.go:285] * 
	W1212 00:33:43.737311   54101 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:33:43.743292   54101 out.go:203] 
	W1212 00:33:43.746156   54101 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001134626s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 00:33:43.746395   54101 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 00:33:43.746473   54101 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 00:33:43.751052   54101 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272455867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272520269Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272625542Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272714248Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272776665Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272836596Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272893384Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272958435Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.273027211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.273124763Z" level=info msg="Connect containerd service"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.273469529Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.274122072Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287381427Z" level=info msg="Start subscribing containerd event"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287554622Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287703153Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287625047Z" level=info msg="Start recovering state"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327211013Z" level=info msg="Start event monitor"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327399462Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327470929Z" level=info msg="Start streaming server"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327536341Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327597642Z" level=info msg="runtime interface starting up..."
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327652682Z" level=info msg="starting plugins..."
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327716215Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327919745Z" level=info msg="containerd successfully booted in 0.080422s"
	Dec 12 00:21:33 functional-767012 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:33:47.396249   21148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:47.396756   21148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:47.398263   21148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:47.398682   21148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:47.400190   21148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 00:33:47 up  1:16,  0 user,  load average: 0.02, 0.14, 0.33
	Linux functional-767012 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:33:44 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:33:44 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 12 00:33:44 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:33:44 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:33:44 functional-767012 kubelet[20983]: E1212 00:33:44.839218   20983 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:33:44 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:33:44 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:33:45 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 12 00:33:45 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:33:45 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:33:45 functional-767012 kubelet[21022]: E1212 00:33:45.600386   21022 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:33:45 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:33:45 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:33:46 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 12 00:33:46 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:33:46 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:33:46 functional-767012 kubelet[21042]: E1212 00:33:46.351286   21042 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:33:46 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:33:46 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:33:47 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 12 00:33:47 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:33:47 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:33:47 functional-767012 kubelet[21065]: E1212 00:33:47.081405   21065 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:33:47 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:33:47 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012: exit status 2 (340.067647ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-767012" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.11s)
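The kubelet journal above shows the underlying failure: on this cgroup v1 host, kubelet v1.35.0-beta.0 exits during configuration validation ("kubelet is configured to not run on a host using cgroup v1"), so the control plane never comes up and systemd's restart counter just climbs (322 through 325 in the excerpt). A minimal sketch of the opt-in described by the preflight warning follows; the YAML field name failCgroupV1 is assumed from the quoted option name 'FailCgroupV1' and is not verified against this run:

	# Hypothetical sketch: let kubelet v1.35+ start on a cgroup v1 host.
	# 'failCgroupV1: false' is assumed from the warning's 'FailCgroupV1' option;
	# check the kubelet.config.k8s.io reference before relying on it.
	sudo tee -a /var/lib/kubelet/config.yaml <<'EOF'
	failCgroupV1: false
	EOF
	sudo systemctl restart kubelet

Migrating the host to cgroup v2, the route the warning itself recommends, avoids the opt-in entirely.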

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-767012 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-767012 apply -f testdata/invalidsvc.yaml: exit status 1 (57.736856ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-767012 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
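The stderr names its own escape hatch: client-side validation fails because the openapi schema cannot be downloaded, and kubectl offers --validate=false. Note that this only skips the schema check; the same refused connection to 192.168.49.2:8441 would still block the apply until the apiserver is actually running. A sketch using only the command and flag quoted above:

	# Skip client-side validation as the stderr suggests; this does not fix
	# the refused connection, it only removes the openapi download step.
	kubectl --context functional-767012 apply -f testdata/invalidsvc.yaml --validate=false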

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-767012 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-767012 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-767012 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-767012 --alsologtostderr -v=1] stderr:
I1212 00:35:48.685663   71435 out.go:360] Setting OutFile to fd 1 ...
I1212 00:35:48.685807   71435 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:35:48.685824   71435 out.go:374] Setting ErrFile to fd 2...
I1212 00:35:48.685838   71435 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:35:48.686088   71435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
I1212 00:35:48.686355   71435 mustload.go:66] Loading cluster: functional-767012
I1212 00:35:48.686794   71435 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 00:35:48.687318   71435 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
I1212 00:35:48.705075   71435 host.go:66] Checking if "functional-767012" exists ...
I1212 00:35:48.705431   71435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 00:35:48.765268   71435 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:35:48.75551195 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1212 00:35:48.765372   71435 api_server.go:166] Checking apiserver status ...
I1212 00:35:48.765427   71435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:35:48.765463   71435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
I1212 00:35:48.781854   71435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
W1212 00:35:48.888506   71435 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1212 00:35:48.891899   71435 out.go:179] * The control-plane node functional-767012 apiserver is not running: (state=Stopped)
I1212 00:35:48.894815   71435 out.go:179]   To start a cluster, run: "minikube start -p functional-767012"
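The stderr above also shows how the dashboard command decides the apiserver is down: it ssh-es into the node and probes for the process with pgrep. A rough manual equivalent, assuming 'docker exec' into the node container behaves like minikube's ssh runner (a sketch, not something this run executed):

	# Probe for the apiserver process the way the log does, then confirm
	# with minikube's own status view.
	docker exec functional-767012 sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
	  || echo 'no kube-apiserver process found'
	out/minikube-linux-arm64 status -p functional-767012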
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-767012
helpers_test.go:244: (dbg) docker inspect functional-767012:

-- stdout --
	[
	    {
	        "Id": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	        "Created": "2025-12-12T00:06:52.261765556Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42951,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:06:52.317917194Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hostname",
	        "HostsPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hosts",
	        "LogPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e-json.log",
	        "Name": "/functional-767012",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-767012:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-767012",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	                "LowerDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-767012",
	                "Source": "/var/lib/docker/volumes/functional-767012/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-767012",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-767012",
	                "name.minikube.sigs.k8s.io": "functional-767012",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e781257da3adf1d3284ab2a6de0168c3db7957f25a7e53d0015250294193762d",
	            "SandboxKey": "/var/run/docker/netns/e781257da3ad",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-767012": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:4d:78:ba:7d:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "83467cc4cb13818b98ec0d7cb5fc0064ea6eb2c8db4256a8a81330921aa2d9a4",
	                    "EndpointID": "b787b732d8d748776ceeb6e65fab51cc1e79758446bc85ac20043b35593fab12",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-767012",
	                        "6585a82fe5e6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012: exit status 2 (341.214192ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-767012 service hello-node --url                                                                                                          │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ mount     │ -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2804369017/001:/mount-9p --alsologtostderr -v=1              │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ ssh       │ functional-767012 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ ssh       │ functional-767012 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh       │ functional-767012 ssh -- ls -la /mount-9p                                                                                                           │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh       │ functional-767012 ssh cat /mount-9p/test-1765499738560712358                                                                                        │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh       │ functional-767012 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ ssh       │ functional-767012 ssh sudo umount -f /mount-9p                                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ mount     │ -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3942718542/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ ssh       │ functional-767012 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ ssh       │ functional-767012 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh       │ functional-767012 ssh -- ls -la /mount-9p                                                                                                           │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh       │ functional-767012 ssh sudo umount -f /mount-9p                                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ mount     │ -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo322877521/001:/mount1 --alsologtostderr -v=1                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ mount     │ -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo322877521/001:/mount3 --alsologtostderr -v=1                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ mount     │ -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo322877521/001:/mount2 --alsologtostderr -v=1                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ ssh       │ functional-767012 ssh findmnt -T /mount1                                                                                                            │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ ssh       │ functional-767012 ssh findmnt -T /mount1                                                                                                            │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh       │ functional-767012 ssh findmnt -T /mount2                                                                                                            │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh       │ functional-767012 ssh findmnt -T /mount3                                                                                                            │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ mount     │ -p functional-767012 --kill=true                                                                                                                    │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ start     │ -p functional-767012 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ start     │ -p functional-767012 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ start     │ -p functional-767012 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-767012 --alsologtostderr -v=1                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:35:48
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:35:48.421297   71358 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:35:48.421486   71358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:48.421517   71358 out.go:374] Setting ErrFile to fd 2...
	I1212 00:35:48.421538   71358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:48.421819   71358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:35:48.422212   71358 out.go:368] Setting JSON to false
	I1212 00:35:48.423061   71358 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4695,"bootTime":1765495054,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 00:35:48.423162   71358 start.go:143] virtualization:  
	I1212 00:35:48.426514   71358 out.go:179] * [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:35:48.429578   71358 notify.go:221] Checking for updates...
	I1212 00:35:48.430099   71358 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:35:48.433220   71358 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:35:48.436246   71358 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:35:48.439180   71358 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 00:35:48.441913   71358 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:35:48.444801   71358 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:35:48.448078   71358 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:48.448720   71358 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:35:48.471395   71358 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:35:48.471520   71358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:48.541609   71358 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:35:48.526256938 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:35:48.541956   71358 docker.go:319] overlay module found
	I1212 00:35:48.547082   71358 out.go:179] * Using the docker driver based on existing profile
	I1212 00:35:48.549944   71358 start.go:309] selected driver: docker
	I1212 00:35:48.549967   71358 start.go:927] validating driver "docker" against &{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:48.550052   71358 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:35:48.550151   71358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:48.627972   71358 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:35:48.618983237 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:35:48.628394   71358 cni.go:84] Creating CNI manager for ""
	I1212 00:35:48.628445   71358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:35:48.628480   71358 start.go:353] cluster config:
	{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:48.633520   71358 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272455867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272520269Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272625542Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272714248Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272776665Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272836596Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272893384Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272958435Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.273027211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.273124763Z" level=info msg="Connect containerd service"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.273469529Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.274122072Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287381427Z" level=info msg="Start subscribing containerd event"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287554622Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287703153Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287625047Z" level=info msg="Start recovering state"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327211013Z" level=info msg="Start event monitor"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327399462Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327470929Z" level=info msg="Start streaming server"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327536341Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327597642Z" level=info msg="runtime interface starting up..."
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327652682Z" level=info msg="starting plugins..."
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327716215Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327919745Z" level=info msg="containerd successfully booted in 0.080422s"
	Dec 12 00:21:33 functional-767012 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:35:49.937004   23205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:49.937409   23205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:49.938830   23205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:49.939172   23205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:49.940578   23205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 00:35:49 up  1:18,  0 user,  load average: 0.45, 0.25, 0.35
	Linux functional-767012 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:35:46 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:47 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 12 00:35:47 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:47 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:47 functional-767012 kubelet[23055]: E1212 00:35:47.105892   23055 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:47 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:47 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:47 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 12 00:35:47 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:47 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:47 functional-767012 kubelet[23071]: E1212 00:35:47.835679   23071 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:47 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:47 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:48 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 12 00:35:48 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:48 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:48 functional-767012 kubelet[23090]: E1212 00:35:48.590589   23090 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:48 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:48 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:49 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 488.
	Dec 12 00:35:49 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:49 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:49 functional-767012 kubelet[23117]: E1212 00:35:49.326756   23117 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:49 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:49 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012: exit status 2 (337.257727ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-767012" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.72s)
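
The kubelet journal above pinpoints the root cause behind this group of failures: kubelet v1.35.0-beta.0 refuses to start on a host that still mounts the legacy cgroup v1 hierarchy ("kubelet is configured to not run on a host using cgroup v1"), systemd loops through restarts 485-488, the apiserver on port 8441 never comes up, and every kubectl call fails with "connection refused". The Jenkins runner is Ubuntu 20.04, which boots with cgroup v1 by default. A minimal Go sketch of detecting which hierarchy a host is on, assuming golang.org/x/sys/unix is available; it mirrors the effect of kubelet's validation, not kubelet's actual code:

// Sketch: report whether this Linux host uses cgroup v1 or v2.
// Assumes golang.org/x/sys/unix; mirrors the effect of the kubelet
// validation quoted in the log above, not its actual implementation.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		panic(err)
	}
	// CGROUP2_SUPER_MAGIC (0x63677270) marks the unified v2 hierarchy.
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified hierarchy): kubelet v1.35+ can start")
	} else {
		// On a v1 host, /sys/fs/cgroup is a tmpfs of per-controller mounts.
		fmt.Println("cgroup v1 (legacy hierarchy): kubelet v1.35+ refuses to run")
	}
}

On this runner the sketch would take the second branch, matching the restart loop in the kubelet log.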

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 status: exit status 2 (329.213313ms)

                                                
                                                
-- stdout --
	functional-767012
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-767012 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (356.067405ms)

                                                
                                                
-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-767012 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 status -o json: exit status 2 (312.359013ms)

                                                
                                                
-- stdout --
	{"Name":"functional-767012","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-767012 status -o json" : exit status 2
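
The three status invocations above are views of the same data: the default listing, a -f/--format Go template, and -o json. minikube parses the --format argument as a Go text/template evaluated against its status struct, so {{.Kubelet}} and {{.APIServer}} pull the same fields the JSON output serializes (the "kublet:" text is a literal label in the test's template, not a field reference). A minimal sketch of that evaluation, using a hypothetical Status struct whose field names are copied from the -o json line above; illustrative, not minikube's own type:

// Sketch: evaluate a --format-style Go template against a status
// struct. Status is hypothetical, with fields named after the keys
// in the -o json output above; not minikube's actual type.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	s := Status{Name: "functional-767012", Host: "Running",
		Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Configured"}
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
}

Note that the custom-format run above reported kublet:Running while the JSON call a moment later reported "Stopped": with kubelet stuck in a systemd restart loop, each status sample can land on either side of a restart.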
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-767012
helpers_test.go:244: (dbg) docker inspect functional-767012:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	        "Created": "2025-12-12T00:06:52.261765556Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42951,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:06:52.317917194Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hostname",
	        "HostsPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hosts",
	        "LogPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e-json.log",
	        "Name": "/functional-767012",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-767012:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-767012",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	                "LowerDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-767012",
	                "Source": "/var/lib/docker/volumes/functional-767012/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-767012",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-767012",
	                "name.minikube.sigs.k8s.io": "functional-767012",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e781257da3adf1d3284ab2a6de0168c3db7957f25a7e53d0015250294193762d",
	            "SandboxKey": "/var/run/docker/netns/e781257da3ad",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-767012": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:4d:78:ba:7d:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "83467cc4cb13818b98ec0d7cb5fc0064ea6eb2c8db4256a8a81330921aa2d9a4",
	                    "EndpointID": "b787b732d8d748776ceeb6e65fab51cc1e79758446bc85ac20043b35593fab12",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-767012",
	                        "6585a82fe5e6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
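
One detail worth noting in the inspect output: HostConfig.PortBindings requests HostIp 127.0.0.1 with an empty HostPort for every container port, which asks dockerd to choose ephemeral host ports; the ports it actually assigned (32788-32792 here) appear under NetworkSettings.Ports. A minimal sketch of reading the assigned apiserver port back through docker inspect's own -f template, assuming the docker CLI is on PATH and the functional-767012 container still exists:

// Sketch: recover the ephemeral host port dockerd assigned for the
// apiserver port (8441/tcp, requested with HostPort "" above).
// Assumes the docker CLI on PATH and that functional-767012 exists.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("docker", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
		"functional-767012").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver published on 127.0.0.1:%s", out)
}

Against this container it would print 32791, per the Ports map above.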
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012: exit status 2 (337.120069ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service │ functional-767012 service list                                                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ service │ functional-767012 service list -o json                                                                                                              │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ service │ functional-767012 service --namespace=default --https --url hello-node                                                                              │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ service │ functional-767012 service hello-node --url --format={{.IP}}                                                                                         │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ service │ functional-767012 service hello-node --url                                                                                                          │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ mount   │ -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2804369017/001:/mount-9p --alsologtostderr -v=1              │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ ssh     │ functional-767012 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ ssh     │ functional-767012 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-767012 ssh -- ls -la /mount-9p                                                                                                           │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-767012 ssh cat /mount-9p/test-1765499738560712358                                                                                        │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-767012 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ ssh     │ functional-767012 ssh sudo umount -f /mount-9p                                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ mount   │ -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3942718542/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ ssh     │ functional-767012 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ ssh     │ functional-767012 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-767012 ssh -- ls -la /mount-9p                                                                                                           │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-767012 ssh sudo umount -f /mount-9p                                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ mount   │ -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo322877521/001:/mount1 --alsologtostderr -v=1                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ mount   │ -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo322877521/001:/mount3 --alsologtostderr -v=1                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ mount   │ -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo322877521/001:/mount2 --alsologtostderr -v=1                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ ssh     │ functional-767012 ssh findmnt -T /mount1                                                                                                            │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ ssh     │ functional-767012 ssh findmnt -T /mount1                                                                                                            │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-767012 ssh findmnt -T /mount2                                                                                                            │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh     │ functional-767012 ssh findmnt -T /mount3                                                                                                            │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ mount   │ -p functional-767012 --kill=true                                                                                                                    │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:21:30
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:21:30.554245   54101 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:21:30.554345   54101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:21:30.554348   54101 out.go:374] Setting ErrFile to fd 2...
	I1212 00:21:30.554353   54101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:21:30.554677   54101 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:21:30.555164   54101 out.go:368] Setting JSON to false
	I1212 00:21:30.555965   54101 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3837,"bootTime":1765495054,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 00:21:30.556051   54101 start.go:143] virtualization:  
	I1212 00:21:30.559689   54101 out.go:179] * [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:21:30.562867   54101 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:21:30.562960   54101 notify.go:221] Checking for updates...
	I1212 00:21:30.566618   54101 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:21:30.569772   54101 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:21:30.572750   54101 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 00:21:30.576169   54101 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:21:30.579060   54101 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:21:30.582404   54101 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:21:30.582492   54101 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:21:30.621591   54101 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:21:30.621756   54101 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:21:30.683145   54101 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 00:21:30.674181767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:21:30.683240   54101 docker.go:319] overlay module found
	I1212 00:21:30.688118   54101 out.go:179] * Using the docker driver based on existing profile
	I1212 00:21:30.690961   54101 start.go:309] selected driver: docker
	I1212 00:21:30.690971   54101 start.go:927] validating driver "docker" against &{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:21:30.691125   54101 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:21:30.691237   54101 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:21:30.747846   54101 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 00:21:30.73816398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:21:30.748230   54101 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:21:30.748252   54101 cni.go:84] Creating CNI manager for ""
	I1212 00:21:30.748298   54101 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:21:30.748340   54101 start.go:353] cluster config:
	{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:21:30.751463   54101 out.go:179] * Starting "functional-767012" primary control-plane node in "functional-767012" cluster
	I1212 00:21:30.754231   54101 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 00:21:30.757160   54101 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:21:30.760119   54101 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:21:30.760160   54101 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 00:21:30.760168   54101 cache.go:65] Caching tarball of preloaded images
	I1212 00:21:30.760193   54101 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:21:30.760258   54101 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 00:21:30.760267   54101 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 00:21:30.760383   54101 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/config.json ...
	I1212 00:21:30.778906   54101 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:21:30.778917   54101 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:21:30.778938   54101 cache.go:243] Successfully downloaded all kic artifacts
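
At this point the start path has satisfied both cache checks: the preloaded-images tarball is already on disk and the kicbase image is already in the local Docker daemon, so nothing is downloaded. A minimal sketch of the tarball half of that check, with the path taken from the log and the helper name invented for illustration:

	package main

	import (
		"fmt"
		"os"
	)

	// hasPreload reports whether the preloaded-images tarball is already cached,
	// mirroring the "Found local preload ... skipping download" decision above.
	// The path resolution is illustrative, not minikube's actual cache logic.
	func hasPreload(path string) bool {
		info, err := os.Stat(path)
		return err == nil && info.Size() > 0
	}

	func main() {
		tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4")
		if hasPreload(tarball) {
			fmt.Println("found local preload, skipping download")
		} else {
			fmt.Println("preload missing, would download")
		}
	}
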
	I1212 00:21:30.778968   54101 start.go:360] acquireMachinesLock for functional-767012: {Name:mk41cf89e93a3830367886ebbef2bb8f6e99e3f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:21:30.779070   54101 start.go:364] duration metric: took 80.115µs to acquireMachinesLock for "functional-767012"
	I1212 00:21:30.779088   54101 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:21:30.779093   54101 fix.go:54] fixHost starting: 
	I1212 00:21:30.779346   54101 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:21:30.795901   54101 fix.go:112] recreateIfNeeded on functional-767012: state=Running err=<nil>
	W1212 00:21:30.795920   54101 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:21:30.799043   54101 out.go:252] * Updating the running docker "functional-767012" container ...
	I1212 00:21:30.799064   54101 machine.go:94] provisionDockerMachine start ...
	I1212 00:21:30.799139   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:30.816214   54101 main.go:143] libmachine: Using SSH client type: native
	I1212 00:21:30.816539   54101 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:21:30.816545   54101 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:21:30.966929   54101 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
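
Every SSH step above begins with the same docker container inspect template, which resolves the host port Docker mapped to the container's 22/tcp (32788 for this run). A small sketch of that lookup, assuming only the docker CLI on PATH; the function name is made up:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort asks Docker which host port is mapped to the container's
	// 22/tcp, using the same inspect template the log runs before each SSH session.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("functional-767012")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("ssh -p", port, "docker@127.0.0.1") // e.g. 32788 in the log above
	}
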
	I1212 00:21:30.966943   54101 ubuntu.go:182] provisioning hostname "functional-767012"
	I1212 00:21:30.967026   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:30.983921   54101 main.go:143] libmachine: Using SSH client type: native
	I1212 00:21:30.984212   54101 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:21:30.984220   54101 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-767012 && echo "functional-767012" | sudo tee /etc/hostname
	I1212 00:21:31.148238   54101 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
	I1212 00:21:31.148339   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:31.167090   54101 main.go:143] libmachine: Using SSH client type: native
	I1212 00:21:31.167393   54101 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:21:31.167407   54101 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-767012' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-767012/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-767012' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:21:31.315620   54101 main.go:143] libmachine: SSH cmd err, output: <nil>: 
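
The shell executed above is minikube's idempotent /etc/hosts update: leave the file alone when the hostname already resolves, rewrite an existing 127.0.1.1 line in place, and otherwise append one. A sketch of how such a snippet can be rendered from Go; the shell text is copied from the log, the helper name is hypothetical:

	package main

	import "fmt"

	// hostsEnsureCmd renders the shell snippet seen in the log: rewrite an
	// existing 127.0.1.1 line if present, otherwise append one, and do nothing
	// when the hostname already has an entry.
	func hostsEnsureCmd(name string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
	}

	func main() {
		fmt.Println(hostsEnsureCmd("functional-767012"))
	}
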
	I1212 00:21:31.315644   54101 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 00:21:31.315665   54101 ubuntu.go:190] setting up certificates
	I1212 00:21:31.315680   54101 provision.go:84] configureAuth start
	I1212 00:21:31.315738   54101 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:21:31.348126   54101 provision.go:143] copyHostCerts
	I1212 00:21:31.348184   54101 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 00:21:31.348191   54101 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 00:21:31.348265   54101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 00:21:31.348353   54101 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 00:21:31.348357   54101 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 00:21:31.348380   54101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 00:21:31.348433   54101 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 00:21:31.348436   54101 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 00:21:31.348457   54101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 00:21:31.348500   54101 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.functional-767012 san=[127.0.0.1 192.168.49.2 functional-767012 localhost minikube]
	I1212 00:21:31.571131   54101 provision.go:177] copyRemoteCerts
	I1212 00:21:31.571185   54101 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:21:31.571226   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:31.588332   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:31.690410   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:21:31.707240   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:21:31.724075   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:21:31.740524   54101 provision.go:87] duration metric: took 424.823605ms to configureAuth
	I1212 00:21:31.740541   54101 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:21:31.740761   54101 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:21:31.740771   54101 machine.go:97] duration metric: took 941.698571ms to provisionDockerMachine
	I1212 00:21:31.740778   54101 start.go:293] postStartSetup for "functional-767012" (driver="docker")
	I1212 00:21:31.740788   54101 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:21:31.740838   54101 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:21:31.740873   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:31.758388   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:31.866987   54101 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:21:31.870573   54101 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:21:31.870591   54101 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:21:31.870603   54101 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 00:21:31.870659   54101 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 00:21:31.870732   54101 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 00:21:31.870809   54101 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts -> hosts in /etc/test/nested/copy/4290
	I1212 00:21:31.870853   54101 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4290
	I1212 00:21:31.878221   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:21:31.898601   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts --> /etc/test/nested/copy/4290/hosts (40 bytes)
	I1212 00:21:31.917863   54101 start.go:296] duration metric: took 177.070825ms for postStartSetup
	I1212 00:21:31.917948   54101 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:21:31.917994   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:31.934865   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:32.037797   54101 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:21:32.044535   54101 fix.go:56] duration metric: took 1.265435742s for fixHost
	I1212 00:21:32.044551   54101 start.go:83] releasing machines lock for "functional-767012", held for 1.265473363s
	I1212 00:21:32.044634   54101 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:21:32.063486   54101 ssh_runner.go:195] Run: cat /version.json
	I1212 00:21:32.063525   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:32.063754   54101 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:21:32.063796   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:32.082463   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:32.110313   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:32.198490   54101 ssh_runner.go:195] Run: systemctl --version
	I1212 00:21:32.295700   54101 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:21:32.300162   54101 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:21:32.300220   54101 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:21:32.308110   54101 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:21:32.308123   54101 start.go:496] detecting cgroup driver to use...
	I1212 00:21:32.308152   54101 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 00:21:32.308196   54101 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:21:32.324857   54101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:21:32.337980   54101 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:21:32.338034   54101 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:21:32.353838   54101 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:21:32.367832   54101 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:21:32.501329   54101 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:21:32.628856   54101 docker.go:234] disabling docker service ...
	I1212 00:21:32.628933   54101 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:21:32.643664   54101 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:21:32.657070   54101 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:21:32.773509   54101 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:21:32.920829   54101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:21:32.933710   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:21:32.947319   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 00:21:32.956944   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:21:32.966825   54101 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:21:32.966891   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:21:32.976378   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:21:32.985341   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:21:32.995459   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:21:33.011573   54101 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:21:33.020559   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 00:21:33.029747   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 00:21:33.038731   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 00:21:33.048050   54101 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:21:33.056172   54101 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:21:33.063953   54101 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:21:33.190754   54101 ssh_runner.go:195] Run: sudo systemctl restart containerd
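
The sed series above rewrites /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false to match the detected cgroupfs driver, migrate v1 runtime entries to the runc v2 shim, fix the CNI conf_dir, and re-enable unprivileged ports, then daemon-reload and restart containerd. A compressed sketch of driving a subset of those edits, where a local shell stands in for the SSH runner; the sed expressions are copied from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes one shell command locally; in minikube the equivalent goes
	// through the ssh_runner seen in the log.
	func run(cmd string) error {
		return exec.Command("sh", "-c", cmd).Run()
	}

	func main() {
		edits := []string{
			`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml`,
			`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
			`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
			`sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
		}
		for _, e := range edits {
			if err := run(e); err != nil {
				fmt.Println("edit failed:", err)
				return
			}
		}
		// Pick the edits up, exactly as the log does after the last sed.
		_ = run("sudo systemctl daemon-reload && sudo systemctl restart containerd")
	}
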
	I1212 00:21:33.330744   54101 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 00:21:33.330802   54101 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 00:21:33.334307   54101 start.go:564] Will wait 60s for crictl version
	I1212 00:21:33.334373   54101 ssh_runner.go:195] Run: which crictl
	I1212 00:21:33.337855   54101 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:21:33.361388   54101 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 00:21:33.361444   54101 ssh_runner.go:195] Run: containerd --version
	I1212 00:21:33.383087   54101 ssh_runner.go:195] Run: containerd --version
	I1212 00:21:33.409485   54101 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 00:21:33.412580   54101 cli_runner.go:164] Run: docker network inspect functional-767012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:21:33.429552   54101 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:21:33.436766   54101 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 00:21:33.439631   54101 kubeadm.go:884] updating cluster {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:21:33.439814   54101 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:21:33.439917   54101 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:21:33.465266   54101 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:21:33.465277   54101 containerd.go:534] Images already preloaded, skipping extraction
	I1212 00:21:33.465345   54101 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:21:33.495685   54101 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:21:33.495696   54101 cache_images.go:86] Images are preloaded, skipping loading
	I1212 00:21:33.495703   54101 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 00:21:33.495800   54101 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-767012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
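
The generated systemd drop-in first clears ExecStart= (systemd requires an empty assignment before a unit's command line can be redefined) and then restates the kubelet invocation with this node's overrides. A sketch that renders the same fragment with text/template; the struct fields are illustrative, the values are the ones above:

	package main

	import (
		"os"
		"text/template"
	)

	// dropIn mirrors the kubelet unit fragment in the log: an empty ExecStart=
	// resets any earlier definition before the full command line is set.
	const dropIn = `[Unit]
	Wants=containerd.service

	[Service]
	ExecStart=
	ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		_ = t.Execute(os.Stdout, struct{ KubeletPath, NodeName, NodeIP string }{
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet", "functional-767012", "192.168.49.2",
		})
	}
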
	I1212 00:21:33.495863   54101 ssh_runner.go:195] Run: sudo crictl info
	I1212 00:21:33.520655   54101 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 00:21:33.520679   54101 cni.go:84] Creating CNI manager for ""
	I1212 00:21:33.520688   54101 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:21:33.520701   54101 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:21:33.520721   54101 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-767012 NodeName:functional-767012 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:21:33.520840   54101 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-767012"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
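
The rendered kubeadm.yaml stacks four YAML documents separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, all consumed by kubeadm from the single file at /var/tmp/minikube/kubeadm.yaml. A sketch that walks such a multi-document file and reports each kind, assuming the gopkg.in/yaml.v3 module is available:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()

		dec := yaml.NewDecoder(f) // one Decode call per "---"-separated document
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
				break
			} else if err != nil {
				fmt.Println("parse:", err)
				return
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}
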
	I1212 00:21:33.520909   54101 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:21:33.528771   54101 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:21:33.528832   54101 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:21:33.537845   54101 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 00:21:33.552578   54101 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:21:33.567275   54101 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1212 00:21:33.581608   54101 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:21:33.586017   54101 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:21:33.720787   54101 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:21:34.285938   54101 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012 for IP: 192.168.49.2
	I1212 00:21:34.285949   54101 certs.go:195] generating shared ca certs ...
	I1212 00:21:34.285964   54101 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:21:34.286114   54101 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 00:21:34.286160   54101 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 00:21:34.286167   54101 certs.go:257] generating profile certs ...
	I1212 00:21:34.286262   54101 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key
	I1212 00:21:34.286326   54101 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key.fcbff5a4
	I1212 00:21:34.286371   54101 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key
	I1212 00:21:34.286484   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 00:21:34.286514   54101 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 00:21:34.286521   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:21:34.286547   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:21:34.286569   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:21:34.286590   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 00:21:34.286633   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:21:34.287348   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:21:34.308553   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:21:34.331894   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:21:34.355464   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:21:34.374443   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:21:34.393434   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:21:34.411599   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:21:34.429619   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:21:34.447321   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 00:21:34.464997   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:21:34.482627   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 00:21:34.500926   54101 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:21:34.513622   54101 ssh_runner.go:195] Run: openssl version
	I1212 00:21:34.519764   54101 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 00:21:34.527069   54101 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 00:21:34.534472   54101 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 00:21:34.538121   54101 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 00:21:34.538179   54101 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 00:21:34.579437   54101 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:21:34.586891   54101 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 00:21:34.594262   54101 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 00:21:34.601868   54101 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 00:21:34.605501   54101 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 00:21:34.605557   54101 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 00:21:34.646393   54101 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:21:34.653807   54101 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:34.661225   54101 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:21:34.668768   54101 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:34.672511   54101 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:34.672567   54101 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:34.713655   54101 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
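
Each of the three certificate blocks above follows one pattern: place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and ensure the /etc/ssl/certs/<hash>.0 symlink that TLS lookups expect (b5213941.0 for minikubeCA here). A sketch of the hash-then-link step, shelling out to openssl exactly as the log does; error handling is simplified:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// subjectHash returns the short OpenSSL subject hash used to name symlinks
	// in /etc/ssl/certs (e.g. b5213941 for minikubeCA in the log above).
	func subjectHash(pem string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		hash, err := subjectHash(pem)
		if err != nil {
			fmt.Println(err)
			return
		}
		link := "/etc/ssl/certs/" + hash + ".0"
		os.Remove(link) // `ln -fs` semantics: replace the link if it exists
		if err := os.Symlink(pem, link); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("linked", link, "->", pem)
	}
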
	I1212 00:21:34.721031   54101 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:21:34.724786   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:21:34.765815   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:21:34.806690   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:21:34.847558   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:21:34.888576   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:21:34.933434   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 00:21:34.978399   54101 kubeadm.go:401] StartCluster: {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:21:34.978479   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 00:21:34.978543   54101 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:21:35.017576   54101 cri.go:89] found id: ""
	I1212 00:21:35.017638   54101 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:21:35.026096   54101 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:21:35.026118   54101 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:21:35.026171   54101 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:21:35.034785   54101 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:21:35.035314   54101 kubeconfig.go:125] found "functional-767012" server: "https://192.168.49.2:8441"
	I1212 00:21:35.036573   54101 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:21:35.046414   54101 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 00:07:00.613095536 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 00:21:33.576611675 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
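
Drift detection is a plain diff -u between the kubeadm.yaml already on the node and the freshly rendered .new file: exit status 0 means identical, 1 means the files differ and the cluster must be reconfigured (the case above, where the admission plugins changed), and anything else is a diff failure. A sketch of that three-way interpretation of the exit code:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// configDrifted runs `sudo diff -u oldPath newPath` and maps diff's exit
	// codes: 0 = identical, 1 = files differ (drift), >1 = diff itself failed.
	func configDrifted(oldPath, newPath string) (bool, string, error) {
		out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
		if err == nil {
			return false, "", nil
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 1 {
			return true, string(out), nil
		}
		return false, "", err
	}

	func main() {
		drift, patch, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Println(err)
			return
		}
		if drift {
			fmt.Println("detected kubeadm config drift:\n" + patch)
		}
	}
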
	I1212 00:21:35.046427   54101 kubeadm.go:1161] stopping kube-system containers ...
	I1212 00:21:35.046437   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1212 00:21:35.046492   54101 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:21:35.082797   54101 cri.go:89] found id: ""
	I1212 00:21:35.082857   54101 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 00:21:35.102877   54101 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:21:35.111403   54101 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 12 00:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 12 00:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 12 00:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 12 00:11 /etc/kubernetes/scheduler.conf
	
	I1212 00:21:35.111465   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:21:35.120302   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:21:35.128075   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:21:35.128131   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:21:35.135780   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:21:35.143743   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:21:35.143796   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:21:35.151555   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:21:35.159766   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:21:35.159823   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
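
The grep/rm pairs above enforce a simple invariant: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8441 is removed, so the following kubeadm init phase kubeconfig regenerates it with the right endpoint. A sketch of the same check done natively instead of via grep; the endpoint and file list come from the log:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8441"
		files := []string{
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil {
				continue // missing file: nothing to clean up
			}
			if !bytes.Contains(data, []byte(endpoint)) {
				fmt.Println(endpoint, "not in", f, "- removing so kubeadm regenerates it")
				_ = os.Remove(f) // the log does this as `sudo rm -f`
			}
		}
	}
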
	I1212 00:21:35.167617   54101 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:21:35.175675   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:21:35.223997   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:21:36.520500   54101 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.296478898s)
	I1212 00:21:36.520559   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:21:36.729554   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:21:36.788511   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:21:36.835897   54101 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:21:36.835964   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:37.336817   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[117 identical poll lines elided: sudo pgrep -xnf kube-apiserver.*minikube.* re-run every ~500ms from 00:21:37.8 through 00:22:35.8, never finding a matching process]
	I1212 00:22:36.336423   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
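The half-second cadence of those timestamps is api_server.go waiting for the apiserver process to appear: the same pgrep is retried until it matches or the wait gives up. A minimal sketch of such a loop, with a hypothetical timeout in place of minikube's real deadline logic:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer re-runs pgrep every 500ms until the kube-apiserver
// process shows up or the (hypothetical) timeout expires.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same probe as the log; Run() returns nil once pgrep matches.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err)
	}
}
```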
	I1212 00:22:36.836018   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:36.836096   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:36.862427   54101 cri.go:89] found id: ""
	I1212 00:22:36.862441   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.862448   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:36.862453   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:36.862517   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:36.892149   54101 cri.go:89] found id: ""
	I1212 00:22:36.892163   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.892169   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:36.892175   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:36.892234   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:36.916655   54101 cri.go:89] found id: ""
	I1212 00:22:36.916670   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.916677   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:36.916681   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:36.916753   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:36.945533   54101 cri.go:89] found id: ""
	I1212 00:22:36.945546   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.945554   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:36.945559   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:36.945616   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:36.970456   54101 cri.go:89] found id: ""
	I1212 00:22:36.970469   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.970477   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:36.970482   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:36.970556   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:36.997550   54101 cri.go:89] found id: ""
	I1212 00:22:36.997568   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.997577   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:36.997582   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:36.997656   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:37.043296   54101 cri.go:89] found id: ""
	I1212 00:22:37.043319   54101 logs.go:282] 0 containers: []
	W1212 00:22:37.043326   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:37.043334   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:37.043344   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:37.115314   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:37.115335   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:37.126489   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:37.126505   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:37.191880   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:37.183564   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.183995   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.185555   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.185892   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.187528   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:37.183564   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.183995   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.185555   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.185892   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.187528   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:37.191890   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:37.191900   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:37.253331   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:37.253349   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
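Each "listing CRI containers" line above pairs a cri.go query with a `crictl ps -a --quiet --name=<component>` call; an empty result is what produces the `found id: ""` and "0 containers" warnings. A sketch of that lookup, under the assumption that crictl is on PATH:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the cri.go:54 lookup: crictl prints one ID per
// line, so empty output is the `found id: ""` case in the log.
func listContainers(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for _, c := range components {
		if ids := listContainers(c); len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}
```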
	I1212 00:22:39.783593   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:39.793972   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:39.794055   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:39.822155   54101 cri.go:89] found id: ""
	I1212 00:22:39.822169   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.822176   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:39.822181   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:39.822250   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:39.847125   54101 cri.go:89] found id: ""
	I1212 00:22:39.847138   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.847145   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:39.847150   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:39.847210   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:39.872050   54101 cri.go:89] found id: ""
	I1212 00:22:39.872064   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.872072   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:39.872077   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:39.872143   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:39.896579   54101 cri.go:89] found id: ""
	I1212 00:22:39.896592   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.896599   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:39.896606   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:39.896664   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:39.921505   54101 cri.go:89] found id: ""
	I1212 00:22:39.921520   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.921537   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:39.921543   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:39.921602   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:39.949647   54101 cri.go:89] found id: ""
	I1212 00:22:39.949660   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.949667   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:39.949672   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:39.949739   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:39.972863   54101 cri.go:89] found id: ""
	I1212 00:22:39.972877   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.972886   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:39.972894   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:39.972904   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:39.983379   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:39.983394   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:40.083583   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:40.071923   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.073365   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.075724   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.076148   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.078746   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:40.071923   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.073365   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.075724   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.076148   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.078746   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:40.083593   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:40.083604   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:40.153645   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:40.153664   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:22:40.181452   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:40.181471   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:42.742128   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:42.752298   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:42.752357   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:42.777204   54101 cri.go:89] found id: ""
	I1212 00:22:42.777218   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.777225   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:42.777236   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:42.777295   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:42.801649   54101 cri.go:89] found id: ""
	I1212 00:22:42.801663   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.801670   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:42.801675   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:42.801731   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:42.826035   54101 cri.go:89] found id: ""
	I1212 00:22:42.826048   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.826055   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:42.826059   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:42.826131   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:42.853290   54101 cri.go:89] found id: ""
	I1212 00:22:42.853303   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.853310   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:42.853316   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:42.853372   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:42.880012   54101 cri.go:89] found id: ""
	I1212 00:22:42.880025   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.880033   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:42.880037   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:42.880097   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:42.909253   54101 cri.go:89] found id: ""
	I1212 00:22:42.909267   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.909274   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:42.909279   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:42.909335   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:42.936731   54101 cri.go:89] found id: ""
	I1212 00:22:42.936745   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.936756   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:42.936764   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:42.936782   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:42.991768   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:42.991787   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:43.005267   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:43.005283   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:43.089221   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:43.080335   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.081099   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.082720   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.083301   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.084856   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:43.080335   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.081099   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.082720   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.083301   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.084856   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:43.089233   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:43.089244   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:43.153170   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:43.153191   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:22:45.684515   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:45.696038   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:45.696106   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:45.721408   54101 cri.go:89] found id: ""
	I1212 00:22:45.721422   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.721439   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:45.721446   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:45.721518   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:45.746760   54101 cri.go:89] found id: ""
	I1212 00:22:45.746774   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.746781   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:45.746794   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:45.746852   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:45.784086   54101 cri.go:89] found id: ""
	I1212 00:22:45.784100   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.784107   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:45.784113   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:45.784196   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:45.809513   54101 cri.go:89] found id: ""
	I1212 00:22:45.809527   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.809534   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:45.809547   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:45.809603   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:45.833922   54101 cri.go:89] found id: ""
	I1212 00:22:45.833935   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.833943   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:45.833957   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:45.834020   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:45.858716   54101 cri.go:89] found id: ""
	I1212 00:22:45.858738   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.858745   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:45.858751   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:45.858819   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:45.886125   54101 cri.go:89] found id: ""
	I1212 00:22:45.886140   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.886161   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:45.886170   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:45.886181   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:22:45.913706   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:45.913723   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:45.972155   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:45.972173   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:45.982756   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:45.982771   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:46.057549   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:46.048888   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.049652   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.050838   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.051562   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.053189   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:46.048888   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.049652   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.050838   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.051562   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.053189   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:46.057568   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:46.057589   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:48.631952   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:48.641871   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:48.641945   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:48.667026   54101 cri.go:89] found id: ""
	I1212 00:22:48.667040   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.667047   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:48.667052   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:48.667111   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:48.694393   54101 cri.go:89] found id: ""
	I1212 00:22:48.694407   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.694414   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:48.694419   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:48.694479   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:48.723393   54101 cri.go:89] found id: ""
	I1212 00:22:48.723406   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.723413   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:48.723418   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:48.723480   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:48.749414   54101 cri.go:89] found id: ""
	I1212 00:22:48.749427   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.749434   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:48.749440   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:48.749500   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:48.773494   54101 cri.go:89] found id: ""
	I1212 00:22:48.773508   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.773514   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:48.773520   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:48.773584   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:48.798476   54101 cri.go:89] found id: ""
	I1212 00:22:48.798490   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.798497   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:48.798502   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:48.798570   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:48.823097   54101 cri.go:89] found id: ""
	I1212 00:22:48.823112   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.823119   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:48.823127   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:48.823136   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:48.884369   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:48.884390   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:22:48.918017   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:48.918032   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:48.974636   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:48.974656   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:48.985524   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:48.985540   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:49.075379   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:49.063866   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.064550   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.067284   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.067979   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.070881   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:49.063866   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.064550   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.067284   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.067979   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.070881   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:51.575612   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:51.585822   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:51.585880   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:51.611290   54101 cri.go:89] found id: ""
	I1212 00:22:51.611304   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.611311   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:51.611317   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:51.611376   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:51.638852   54101 cri.go:89] found id: ""
	I1212 00:22:51.638868   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.638875   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:51.638882   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:51.638941   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:51.663831   54101 cri.go:89] found id: ""
	I1212 00:22:51.663845   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.663852   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:51.663857   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:51.663914   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:51.689264   54101 cri.go:89] found id: ""
	I1212 00:22:51.689278   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.689286   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:51.689291   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:51.689350   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:51.714774   54101 cri.go:89] found id: ""
	I1212 00:22:51.714788   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.714795   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:51.714800   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:51.714889   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:51.739800   54101 cri.go:89] found id: ""
	I1212 00:22:51.739814   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.739822   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:51.739827   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:51.739885   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:51.767107   54101 cri.go:89] found id: ""
	I1212 00:22:51.767134   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.767142   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:51.767150   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:51.767160   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:51.821534   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:51.821552   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:51.832147   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:51.832161   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:51.897869   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:51.890100   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.890663   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.892157   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.892582   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.894067   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:51.890100   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.890663   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.892157   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.892582   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.894067   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:51.897889   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:51.897899   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:51.958502   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:51.958519   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:22:54.487348   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:54.497592   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:54.497655   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:54.524765   54101 cri.go:89] found id: ""
	I1212 00:22:54.524779   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.524787   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:54.524800   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:54.524860   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:54.549685   54101 cri.go:89] found id: ""
	I1212 00:22:54.549699   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.549706   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:54.549710   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:54.549766   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:54.573523   54101 cri.go:89] found id: ""
	I1212 00:22:54.573537   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.573544   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:54.573549   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:54.573607   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:54.602326   54101 cri.go:89] found id: ""
	I1212 00:22:54.602342   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.602349   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:54.602354   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:54.602411   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:54.626746   54101 cri.go:89] found id: ""
	I1212 00:22:54.626777   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.626784   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:54.626792   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:54.626860   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:54.652678   54101 cri.go:89] found id: ""
	I1212 00:22:54.652693   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.652715   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:54.652720   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:54.652789   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:54.677588   54101 cri.go:89] found id: ""
	I1212 00:22:54.677602   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.677609   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:54.677617   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:54.677627   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:54.733727   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:54.733750   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:54.744434   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:54.744450   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:54.810290   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:54.802232   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.802635   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.804258   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.804924   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.806440   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:22:54.810301   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:54.810311   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:54.869777   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:54.869794   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
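	The probes above repeat on a roughly three-second cadence (00:22:51, 00:22:54, 00:22:57, ...): while pgrep finds no kube-apiserver process, the same diagnostics are re-gathered. A minimal Go sketch of such a poll-until-found loop, assuming local command execution; the helper name, timeout, and 3-second interval are illustrative assumptions, not minikube's actual implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a running kube-apiserver process until the
// deadline expires. The probe mirrors the one in the log above
// (pgrep -xnf kube-apiserver.*minikube.*); the loop itself is a sketch.
func waitForAPIServer(deadline time.Duration) error {
	timeout := time.After(deadline)
	tick := time.NewTicker(3 * time.Second) // matches the ~3s gap between probes
	defer tick.Stop()
	for {
		select {
		case <-timeout:
			return fmt.Errorf("kube-apiserver did not appear within %s", deadline)
		case <-tick.C:
			// pgrep exits 0 only when a matching process exists.
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil // process found
			}
			// Not found yet: a real implementation would gather diagnostics here.
		}
	}
}

func main() {
	if err := waitForAPIServer(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}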
	I1212 00:22:57.396960   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:57.406761   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:57.406819   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:57.431202   54101 cri.go:89] found id: ""
	I1212 00:22:57.431216   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.431223   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:57.431228   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:57.431285   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:57.456103   54101 cri.go:89] found id: ""
	I1212 00:22:57.456116   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.456123   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:57.456129   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:57.456185   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:57.482677   54101 cri.go:89] found id: ""
	I1212 00:22:57.482690   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.482697   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:57.482703   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:57.482776   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:57.507899   54101 cri.go:89] found id: ""
	I1212 00:22:57.507912   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.507919   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:57.507925   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:57.507986   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:57.536079   54101 cri.go:89] found id: ""
	I1212 00:22:57.536093   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.536101   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:57.536106   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:57.536167   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:57.564822   54101 cri.go:89] found id: ""
	I1212 00:22:57.564836   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.564843   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:57.564857   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:57.564923   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:57.589921   54101 cri.go:89] found id: ""
	I1212 00:22:57.589935   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.589943   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:57.589951   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:57.589961   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:57.648534   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:57.648552   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:57.659464   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:57.659481   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:57.727477   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:57.718925   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.719812   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.721551   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.722035   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.723542   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:22:57.727497   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:57.727508   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:57.791545   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:57.791567   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:00.319474   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:00.337512   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:00.337596   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:00.386008   54101 cri.go:89] found id: ""
	I1212 00:23:00.386034   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.386042   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:00.386048   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:00.386118   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:00.435933   54101 cri.go:89] found id: ""
	I1212 00:23:00.435948   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.435961   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:00.435966   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:00.436033   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:00.464332   54101 cri.go:89] found id: ""
	I1212 00:23:00.464347   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.464354   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:00.464360   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:00.464438   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:00.492272   54101 cri.go:89] found id: ""
	I1212 00:23:00.492288   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.492296   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:00.492308   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:00.492399   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:00.523157   54101 cri.go:89] found id: ""
	I1212 00:23:00.523172   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.523180   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:00.523185   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:00.523251   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:00.551205   54101 cri.go:89] found id: ""
	I1212 00:23:00.551219   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.551227   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:00.551232   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:00.551303   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:00.581595   54101 cri.go:89] found id: ""
	I1212 00:23:00.581609   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.581616   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:00.581624   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:00.581637   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:00.638838   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:00.638857   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:00.650126   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:00.650141   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:00.717921   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:00.707574   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.709178   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.709927   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.711724   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.712419   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:00.717933   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:00.717947   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:00.780105   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:00.780123   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
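	Every describe-nodes attempt fails the same way: kubectl cannot reach https://localhost:8441 and gets "connection refused", which is consistent with crictl reporting no kube-apiserver container. A quick way to confirm that nothing is listening on the port, sketched in Go; the host and port come from the log, but the probe itself is an illustration, not part of the test suite:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" from kubectl means no listener on 8441; a plain
	// TCP dial reproduces the same failure without involving kubectl.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8441")
}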
	I1212 00:23:03.311322   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:03.323283   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:03.323344   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:03.361266   54101 cri.go:89] found id: ""
	I1212 00:23:03.361281   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.361288   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:03.361293   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:03.361353   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:03.386333   54101 cri.go:89] found id: ""
	I1212 00:23:03.386347   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.386353   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:03.386363   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:03.386421   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:03.413227   54101 cri.go:89] found id: ""
	I1212 00:23:03.413241   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.413248   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:03.413253   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:03.413310   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:03.437970   54101 cri.go:89] found id: ""
	I1212 00:23:03.437991   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.437999   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:03.438004   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:03.438060   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:03.466477   54101 cri.go:89] found id: ""
	I1212 00:23:03.466491   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.466499   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:03.466504   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:03.466561   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:03.491808   54101 cri.go:89] found id: ""
	I1212 00:23:03.491821   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.491828   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:03.491834   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:03.491890   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:03.517149   54101 cri.go:89] found id: ""
	I1212 00:23:03.517163   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.517170   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:03.517177   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:03.517187   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:03.572746   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:03.572773   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:03.584001   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:03.584018   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:03.656247   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:03.647626   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.648470   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.650161   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.650723   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.652396   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:03.656257   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:03.656268   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:03.722945   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:03.722971   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:06.251078   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:06.261552   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:06.261613   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:06.289582   54101 cri.go:89] found id: ""
	I1212 00:23:06.289597   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.289605   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:06.289610   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:06.289673   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:06.317842   54101 cri.go:89] found id: ""
	I1212 00:23:06.317855   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.317863   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:06.317868   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:06.317926   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:06.352672   54101 cri.go:89] found id: ""
	I1212 00:23:06.352685   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.352692   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:06.352697   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:06.352752   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:06.382465   54101 cri.go:89] found id: ""
	I1212 00:23:06.382479   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.382486   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:06.382491   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:06.382549   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:06.409293   54101 cri.go:89] found id: ""
	I1212 00:23:06.409307   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.409325   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:06.409351   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:06.409419   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:06.437827   54101 cri.go:89] found id: ""
	I1212 00:23:06.437842   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.437850   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:06.437855   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:06.437916   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:06.461631   54101 cri.go:89] found id: ""
	I1212 00:23:06.461645   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.461652   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:06.461660   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:06.461672   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:06.524818   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:06.524837   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:06.555647   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:06.555663   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:06.613018   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:06.613037   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:06.623988   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:06.624004   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:06.689835   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:06.681072   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.681903   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.683626   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.684195   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.685841   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
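	The "container status" probe in these cycles is a shell fallback chain: resolve crictl via `which crictl || echo crictl`, run it, and only if that whole invocation fails fall back to `sudo docker ps -a`. A sketch of driving that exact one-liner from Go with os/exec, assuming passwordless sudo on the target host (a local stand-in for ssh_runner, which actually runs it over SSH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same one-liner the log runs: `which crictl || echo crictl` resolves the
	// binary path if present (else leaves the bare name), and the trailing
	// `|| sudo docker ps -a` only fires when the crictl invocation fails.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("both crictl and docker listings failed:", err)
	}
	fmt.Print(string(out))
}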
	I1212 00:23:09.190077   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:09.199951   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:09.200011   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:09.224598   54101 cri.go:89] found id: ""
	I1212 00:23:09.224612   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.224619   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:09.224624   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:09.224680   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:09.249246   54101 cri.go:89] found id: ""
	I1212 00:23:09.249259   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.249266   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:09.249270   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:09.249326   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:09.276466   54101 cri.go:89] found id: ""
	I1212 00:23:09.276481   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.276488   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:09.276493   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:09.276569   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:09.305292   54101 cri.go:89] found id: ""
	I1212 00:23:09.305306   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.305320   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:09.305325   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:09.305385   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:09.340249   54101 cri.go:89] found id: ""
	I1212 00:23:09.340263   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.340269   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:09.340274   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:09.340335   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:09.371473   54101 cri.go:89] found id: ""
	I1212 00:23:09.371487   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.371494   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:09.371499   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:09.371560   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:09.396595   54101 cri.go:89] found id: ""
	I1212 00:23:09.396611   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.396618   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:09.396626   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:09.396639   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:09.455271   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:09.455288   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:09.465948   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:09.465963   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:09.533532   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:09.524698   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.525522   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.527378   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.527995   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.529577   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:09.533544   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:09.533554   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:09.595751   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:09.595769   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:12.124276   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:12.134222   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:12.134281   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:12.158363   54101 cri.go:89] found id: ""
	I1212 00:23:12.158377   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.158384   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:12.158390   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:12.158446   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:12.181913   54101 cri.go:89] found id: ""
	I1212 00:23:12.181930   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.181936   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:12.181941   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:12.181997   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:12.206035   54101 cri.go:89] found id: ""
	I1212 00:23:12.206048   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.206055   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:12.206060   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:12.206119   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:12.234593   54101 cri.go:89] found id: ""
	I1212 00:23:12.234606   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.234614   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:12.234618   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:12.234675   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:12.258839   54101 cri.go:89] found id: ""
	I1212 00:23:12.258853   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.258867   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:12.258873   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:12.258931   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:12.295188   54101 cri.go:89] found id: ""
	I1212 00:23:12.295202   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.295219   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:12.295225   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:12.295295   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:12.331819   54101 cri.go:89] found id: ""
	I1212 00:23:12.331833   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.331851   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:12.331859   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:12.331869   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:12.392019   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:12.392036   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:12.402367   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:12.402383   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:12.463715   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:12.455582   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:12.455962   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:12.457659   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:12.458359   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:12.459974   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:12.463724   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:12.463745   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:12.528182   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:12.528200   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:15.057258   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:15.068358   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:15.068421   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:15.094774   54101 cri.go:89] found id: ""
	I1212 00:23:15.094787   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.094804   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:15.094812   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:15.094882   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:15.120167   54101 cri.go:89] found id: ""
	I1212 00:23:15.120180   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.120188   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:15.120193   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:15.120249   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:15.150855   54101 cri.go:89] found id: ""
	I1212 00:23:15.150868   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.150886   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:15.150891   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:15.150958   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:15.179684   54101 cri.go:89] found id: ""
	I1212 00:23:15.179697   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.179704   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:15.179709   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:15.179784   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:15.204315   54101 cri.go:89] found id: ""
	I1212 00:23:15.204338   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.204345   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:15.204350   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:15.204425   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:15.229074   54101 cri.go:89] found id: ""
	I1212 00:23:15.229088   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.229095   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:15.229103   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:15.229168   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:15.253510   54101 cri.go:89] found id: ""
	I1212 00:23:15.253532   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.253540   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:15.253548   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:15.253559   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:15.264299   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:15.264317   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:15.346071   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:15.332347   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:15.334627   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:15.335427   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:15.337189   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:15.337763   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:15.346082   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:15.346092   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:15.414287   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:15.414306   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:15.440115   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:15.440130   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
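	Each gathering cycle walks the same fixed component list (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) and asks crictl for matching container IDs; an empty answer produces the `No container was found matching` warning seen throughout. A condensed Go sketch of that walk; the function name and output format are illustrative assumptions:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the probe in the log: crictl prints one container
// ID per line, and no output means no container matched the name filter.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}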
	I1212 00:23:17.999409   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:18.010537   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:18.010603   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:18.036961   54101 cri.go:89] found id: ""
	I1212 00:23:18.036975   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.036982   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:18.036988   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:18.037047   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:18.065553   54101 cri.go:89] found id: ""
	I1212 00:23:18.065568   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.065575   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:18.065582   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:18.065643   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:18.090902   54101 cri.go:89] found id: ""
	I1212 00:23:18.090916   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.090923   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:18.090927   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:18.090987   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:18.120598   54101 cri.go:89] found id: ""
	I1212 00:23:18.120611   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.120618   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:18.120623   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:18.120686   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:18.147780   54101 cri.go:89] found id: ""
	I1212 00:23:18.147794   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.147801   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:18.147806   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:18.147863   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:18.176272   54101 cri.go:89] found id: ""
	I1212 00:23:18.176286   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.176293   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:18.176306   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:18.176368   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:18.201024   54101 cri.go:89] found id: ""
	I1212 00:23:18.201037   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.201045   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:18.201052   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:18.201062   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:18.211552   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:18.211566   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:18.274135   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:18.266305   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.266699   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.268383   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.268854   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.270264   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:18.266305   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.266699   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.268383   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.268854   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.270264   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:18.274145   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:18.274155   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:18.339516   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:18.339534   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:18.369221   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:18.369236   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
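Each poll cycle above repeats the same probe: a pgrep for a running apiserver process, a crictl listing per control-plane component, then a fresh log sweep. A minimal sketch for reproducing the probe by hand from inside the node (e.g. via minikube ssh -p functional-767012); the container names are the ones queried in the log, and the empty output corresponds to the found id: "" lines:

	# Probe for the apiserver process and per-component containers, as the cycle above does.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # exits 1 while no apiserver process is running
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$c"        # prints nothing here: the container was never created
	done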
	I1212 00:23:20.928503   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:20.938705   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:20.938771   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:20.966429   54101 cri.go:89] found id: ""
	I1212 00:23:20.966442   54101 logs.go:282] 0 containers: []
	W1212 00:23:20.966449   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:20.966463   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:20.966521   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:20.993659   54101 cri.go:89] found id: ""
	I1212 00:23:20.993674   54101 logs.go:282] 0 containers: []
	W1212 00:23:20.993694   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:20.993700   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:20.993783   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:21.021877   54101 cri.go:89] found id: ""
	I1212 00:23:21.021894   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.021901   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:21.021907   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:21.021974   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:21.050301   54101 cri.go:89] found id: ""
	I1212 00:23:21.050315   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.050333   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:21.050338   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:21.050394   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:21.074369   54101 cri.go:89] found id: ""
	I1212 00:23:21.074382   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.074399   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:21.074404   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:21.074459   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:21.100847   54101 cri.go:89] found id: ""
	I1212 00:23:21.100860   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.100867   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:21.100872   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:21.100930   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:21.129915   54101 cri.go:89] found id: ""
	I1212 00:23:21.129928   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.129950   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:21.129958   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:21.129967   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:21.186387   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:21.186407   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:21.197421   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:21.197437   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:21.261078   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:21.252661   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.253431   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.255174   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.255799   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.257304   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:21.252661   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.253431   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.255174   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.255799   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.257304   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:21.261090   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:21.261104   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:21.326885   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:21.326903   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
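Every describe nodes attempt in this window fails identically: the kubeconfig points at localhost:8441, but nothing is listening there, so the dial is refused before any API call is made. A minimal sketch for confirming it is a missing listener rather than a bad kubeconfig (assumes ss is available in the node image):

	# Check for a LISTEN socket on the apiserver port, then hit a raw health endpoint.
	sudo ss -ltn 'sport = :8441'                   # no output: nothing bound to 8441, hence "connection refused"
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz   # fails the same way until the apiserver is up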
	I1212 00:23:23.859105   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:23.869083   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:23.869143   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:23.892667   54101 cri.go:89] found id: ""
	I1212 00:23:23.892681   54101 logs.go:282] 0 containers: []
	W1212 00:23:23.892688   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:23.892693   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:23.892755   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:23.916368   54101 cri.go:89] found id: ""
	I1212 00:23:23.916381   54101 logs.go:282] 0 containers: []
	W1212 00:23:23.916388   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:23.916393   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:23.916456   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:23.953674   54101 cri.go:89] found id: ""
	I1212 00:23:23.953688   54101 logs.go:282] 0 containers: []
	W1212 00:23:23.953695   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:23.953700   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:23.953755   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:23.977280   54101 cri.go:89] found id: ""
	I1212 00:23:23.977293   54101 logs.go:282] 0 containers: []
	W1212 00:23:23.977300   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:23.977305   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:23.977364   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:24.002961   54101 cri.go:89] found id: ""
	I1212 00:23:24.002985   54101 logs.go:282] 0 containers: []
	W1212 00:23:24.003014   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:24.003020   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:24.003098   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:24.034368   54101 cri.go:89] found id: ""
	I1212 00:23:24.034382   54101 logs.go:282] 0 containers: []
	W1212 00:23:24.034393   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:24.034398   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:24.034470   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:24.065761   54101 cri.go:89] found id: ""
	I1212 00:23:24.065775   54101 logs.go:282] 0 containers: []
	W1212 00:23:24.065788   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:24.065796   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:24.065806   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:24.122870   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:24.122890   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:24.134384   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:24.134398   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:24.204008   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:24.196235   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.196812   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.198515   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.198869   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.200088   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:24.196235   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.196812   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.198515   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.198869   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.200088   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:24.204018   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:24.204029   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:24.268817   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:24.268835   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:26.805407   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:26.815561   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:26.815619   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:26.843361   54101 cri.go:89] found id: ""
	I1212 00:23:26.843375   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.843382   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:26.843388   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:26.843447   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:26.867615   54101 cri.go:89] found id: ""
	I1212 00:23:26.867630   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.867637   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:26.867642   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:26.867698   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:26.897089   54101 cri.go:89] found id: ""
	I1212 00:23:26.897102   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.897109   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:26.897114   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:26.897173   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:26.920797   54101 cri.go:89] found id: ""
	I1212 00:23:26.920810   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.920817   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:26.920822   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:26.920878   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:26.948949   54101 cri.go:89] found id: ""
	I1212 00:23:26.948963   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.948970   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:26.948975   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:26.949034   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:26.972541   54101 cri.go:89] found id: ""
	I1212 00:23:26.972555   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.972563   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:26.972568   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:26.972631   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:26.998049   54101 cri.go:89] found id: ""
	I1212 00:23:26.998065   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.998073   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:26.998089   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:26.998102   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:27.027523   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:27.027538   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:27.085127   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:27.085146   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:27.096087   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:27.096101   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:27.162090   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:27.153308   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.154010   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.155943   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.156645   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.158348   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:27.153308   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.154010   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.155943   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.156645   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.158348   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:27.162101   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:27.162111   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:29.728366   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:29.738393   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:29.738452   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:29.764004   54101 cri.go:89] found id: ""
	I1212 00:23:29.764017   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.764024   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:29.764029   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:29.764089   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:29.787843   54101 cri.go:89] found id: ""
	I1212 00:23:29.787857   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.787874   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:29.787879   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:29.787936   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:29.812859   54101 cri.go:89] found id: ""
	I1212 00:23:29.812872   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.812879   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:29.812884   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:29.812941   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:29.837580   54101 cri.go:89] found id: ""
	I1212 00:23:29.837593   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.837600   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:29.837605   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:29.837673   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:29.861535   54101 cri.go:89] found id: ""
	I1212 00:23:29.861560   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.861567   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:29.861572   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:29.861644   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:29.886533   54101 cri.go:89] found id: ""
	I1212 00:23:29.886546   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.886553   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:29.886559   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:29.886624   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:29.913577   54101 cri.go:89] found id: ""
	I1212 00:23:29.913604   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.913611   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:29.913619   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:29.913630   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:29.940660   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:29.940675   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:29.995286   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:29.995307   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:30.029235   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:30.029252   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:30.103143   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:30.093717   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.094664   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.096287   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.096764   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.098381   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:30.093717   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.094664   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.096287   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.096764   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.098381   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:30.103157   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:30.103168   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:32.666081   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:32.676000   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:32.676071   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:32.701112   54101 cri.go:89] found id: ""
	I1212 00:23:32.701125   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.701133   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:32.701138   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:32.701195   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:32.727727   54101 cri.go:89] found id: ""
	I1212 00:23:32.727741   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.727748   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:32.727753   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:32.727810   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:32.756561   54101 cri.go:89] found id: ""
	I1212 00:23:32.756574   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.756581   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:32.756586   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:32.756648   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:32.781745   54101 cri.go:89] found id: ""
	I1212 00:23:32.781758   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.781765   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:32.781771   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:32.781830   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:32.807544   54101 cri.go:89] found id: ""
	I1212 00:23:32.807558   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.807571   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:32.807576   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:32.807634   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:32.837232   54101 cri.go:89] found id: ""
	I1212 00:23:32.837246   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.837253   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:32.837259   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:32.837321   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:32.864631   54101 cri.go:89] found id: ""
	I1212 00:23:32.864645   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.864660   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:32.864667   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:32.864678   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:32.927240   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:32.919009   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.919629   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.921337   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.921842   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.923382   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:32.919009   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.919629   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.921337   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.921842   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.923382   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:32.927249   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:32.927276   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:32.990198   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:32.990226   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:33.020370   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:33.020389   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:33.077339   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:33.077359   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:35.589167   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:35.599047   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:35.599105   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:35.624300   54101 cri.go:89] found id: ""
	I1212 00:23:35.624315   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.624322   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:35.624327   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:35.624387   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:35.647815   54101 cri.go:89] found id: ""
	I1212 00:23:35.647829   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.647837   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:35.647842   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:35.647900   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:35.676530   54101 cri.go:89] found id: ""
	I1212 00:23:35.676544   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.676551   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:35.676556   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:35.676617   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:35.705816   54101 cri.go:89] found id: ""
	I1212 00:23:35.705831   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.705838   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:35.705844   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:35.705903   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:35.733393   54101 cri.go:89] found id: ""
	I1212 00:23:35.733413   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.733421   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:35.733426   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:35.733485   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:35.757717   54101 cri.go:89] found id: ""
	I1212 00:23:35.757731   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.757738   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:35.757743   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:35.757800   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:35.782446   54101 cri.go:89] found id: ""
	I1212 00:23:35.782459   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.782478   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:35.782487   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:35.782497   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:35.839811   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:35.839828   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:35.850443   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:35.850458   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:35.918359   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:35.910728   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.911186   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.912701   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.913021   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.914471   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:35.910728   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.911186   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.912701   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.913021   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.914471   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:35.918370   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:35.918382   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:35.980124   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:35.980143   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
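The container status sweep above relies on a fallback chain rather than a fixed binary. Spelled out as a sketch with the same behaviour as the one-liner in the log:

	# Resolve crictl if present, fall back to the bare name, and to docker as a last resort.
	CRICTL="$(which crictl || echo crictl)"        # if which fails, the bare name is looked up on PATH at run time
	sudo "$CRICTL" ps -a || sudo docker ps -a      # docker ps is only reached if the crictl invocation fails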
	I1212 00:23:38.530800   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:38.542531   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:38.542599   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:38.568754   54101 cri.go:89] found id: ""
	I1212 00:23:38.568767   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.568774   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:38.568788   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:38.568846   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:38.598747   54101 cri.go:89] found id: ""
	I1212 00:23:38.598759   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.598766   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:38.598771   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:38.598838   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:38.623489   54101 cri.go:89] found id: ""
	I1212 00:23:38.623503   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.623519   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:38.623525   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:38.623594   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:38.648000   54101 cri.go:89] found id: ""
	I1212 00:23:38.648013   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.648022   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:38.648027   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:38.648084   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:38.674721   54101 cri.go:89] found id: ""
	I1212 00:23:38.674734   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.674741   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:38.674746   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:38.674808   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:38.700695   54101 cri.go:89] found id: ""
	I1212 00:23:38.700708   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.700715   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:38.700720   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:38.700780   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:38.724873   54101 cri.go:89] found id: ""
	I1212 00:23:38.724886   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.724892   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:38.724900   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:38.724910   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:38.751419   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:38.751434   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:38.807512   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:38.807530   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:38.818972   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:38.819002   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:38.889413   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:38.879843   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.881217   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.881803   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.883544   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.884066   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:38.879843   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.881217   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.881803   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.883544   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.884066   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:38.889425   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:38.889435   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:41.452716   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:41.462650   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:41.462718   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:41.487241   54101 cri.go:89] found id: ""
	I1212 00:23:41.487264   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.487271   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:41.487277   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:41.487335   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:41.511441   54101 cri.go:89] found id: ""
	I1212 00:23:41.511454   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.511461   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:41.511466   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:41.511523   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:41.560805   54101 cri.go:89] found id: ""
	I1212 00:23:41.560819   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.560826   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:41.560831   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:41.560887   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:41.587388   54101 cri.go:89] found id: ""
	I1212 00:23:41.587402   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.587408   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:41.587413   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:41.587469   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:41.611964   54101 cri.go:89] found id: ""
	I1212 00:23:41.611979   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.611986   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:41.611991   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:41.612051   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:41.637582   54101 cri.go:89] found id: ""
	I1212 00:23:41.637595   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.637601   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:41.637606   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:41.637662   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:41.660916   54101 cri.go:89] found id: ""
	I1212 00:23:41.660939   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.660947   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:41.660955   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:41.660964   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:41.720148   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:41.720165   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:41.730670   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:41.730686   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:41.792978   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:41.784826   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.785364   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.786819   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.787322   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.788953   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:41.784826   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.785364   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.786819   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.787322   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.788953   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:41.792987   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:41.792997   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:41.853248   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:41.853264   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
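Note: the block above is one full iteration of minikube's apiserver wait loop. Each pass first checks for a running process with `pgrep -xnf kube-apiserver.*minikube.*`, then queries the containerd CRI for each control-plane component by name; every probe here returns an empty ID list ("found id: \"\""), so the loop falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. The probes can be reproduced on the node with a small shell loop; this is a minimal sketch assuming crictl is on the node's PATH, with the component names taken verbatim from the log:

    # Probe each control-plane component the way the wait loop does; -a includes
    # exited containers, --quiet prints container IDs only. No output for a name
    # means no matching container exists, which is what every probe above reports.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      echo "== ${name} =="
      sudo crictl ps -a --quiet --name="${name}"
    done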
	I1212 00:23:44.384182   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:44.394508   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:44.394568   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:44.418597   54101 cri.go:89] found id: ""
	I1212 00:23:44.418612   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.418619   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:44.418624   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:44.418681   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:44.443581   54101 cri.go:89] found id: ""
	I1212 00:23:44.443595   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.443603   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:44.443608   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:44.443665   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:44.468881   54101 cri.go:89] found id: ""
	I1212 00:23:44.468895   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.468902   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:44.468907   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:44.468965   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:44.493396   54101 cri.go:89] found id: ""
	I1212 00:23:44.493410   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.493417   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:44.493422   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:44.493479   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:44.517484   54101 cri.go:89] found id: ""
	I1212 00:23:44.517498   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.517505   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:44.517510   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:44.517570   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:44.550796   54101 cri.go:89] found id: ""
	I1212 00:23:44.550810   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.550817   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:44.550822   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:44.550883   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:44.576925   54101 cri.go:89] found id: ""
	I1212 00:23:44.576938   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.576946   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:44.576954   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:44.576964   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:44.589144   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:44.589160   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:44.657506   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:44.648963   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.649564   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.651341   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.651846   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.653593   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:44.648963   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.649564   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.651341   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.651846   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.653593   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:44.657515   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:44.657526   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:44.718495   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:44.718513   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:44.745494   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:44.745508   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
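Note: the order of the "Gathering logs for ..." steps varies between iterations, but the command set is fixed. The container-status step uses a double fallback, `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`, so it still yields output when crictl is not on root's PATH or when the runtime is Docker rather than containerd. The same diagnostics can be collected stand-alone; commands are copied verbatim from the log:

    # The four log sources gathered after each failed pass:
    sudo journalctl -u kubelet -n 400                                        # kubelet unit, last 400 lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings and worse, no pager/color
    sudo journalctl -u containerd -n 400                                     # containerd unit, last 400 lines
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a            # container status: CRI first, Docker fallback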
	I1212 00:23:47.304216   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:47.314254   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:47.314318   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:47.339739   54101 cri.go:89] found id: ""
	I1212 00:23:47.339753   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.339760   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:47.339766   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:47.339822   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:47.364136   54101 cri.go:89] found id: ""
	I1212 00:23:47.364150   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.364157   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:47.364162   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:47.364226   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:47.387941   54101 cri.go:89] found id: ""
	I1212 00:23:47.387957   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.387964   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:47.387969   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:47.388026   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:47.412100   54101 cri.go:89] found id: ""
	I1212 00:23:47.412114   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.412121   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:47.412126   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:47.412187   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:47.437977   54101 cri.go:89] found id: ""
	I1212 00:23:47.437997   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.438005   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:47.438011   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:47.438070   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:47.464751   54101 cri.go:89] found id: ""
	I1212 00:23:47.464765   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.464772   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:47.464778   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:47.464834   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:47.492824   54101 cri.go:89] found id: ""
	I1212 00:23:47.492838   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.492845   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:47.492853   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:47.492863   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:47.549187   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:47.549205   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:47.561345   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:47.561361   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:47.637229   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:47.628185   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.629565   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.630374   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.631980   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.632725   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:47.628185   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.629565   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.630374   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.631980   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.632725   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:47.637238   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:47.637249   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:47.700044   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:47.700063   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:50.232142   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:50.242326   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:50.242389   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:50.267337   54101 cri.go:89] found id: ""
	I1212 00:23:50.267351   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.267359   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:50.267364   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:50.267424   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:50.294402   54101 cri.go:89] found id: ""
	I1212 00:23:50.294416   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.294424   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:50.294428   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:50.294489   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:50.318907   54101 cri.go:89] found id: ""
	I1212 00:23:50.318921   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.318928   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:50.318938   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:50.319041   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:50.344349   54101 cri.go:89] found id: ""
	I1212 00:23:50.344362   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.344370   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:50.344375   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:50.344442   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:50.374529   54101 cri.go:89] found id: ""
	I1212 00:23:50.374543   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.374550   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:50.374556   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:50.374612   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:50.400874   54101 cri.go:89] found id: ""
	I1212 00:23:50.400888   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.400896   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:50.400903   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:50.400977   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:50.428510   54101 cri.go:89] found id: ""
	I1212 00:23:50.428525   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.428533   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:50.428541   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:50.428553   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:50.455528   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:50.455545   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:50.510724   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:50.510743   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:50.521665   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:50.521681   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:50.611401   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:50.603277   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.603798   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.605445   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.605921   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.607608   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:50.603277   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.603798   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.605445   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.605921   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.607608   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:50.611411   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:50.611424   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
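Note: every "describe nodes" attempt fails the same way: kubectl, invoked on the node with the embedded kubeconfig, dials the API server at localhost:8441 (the --apiserver-port chosen for this profile) and the connection is refused because no kube-apiserver container exists yet. The reachability can be checked directly; a minimal sketch assuming the same binary and kubeconfig paths as the log (the /readyz endpoint is the standard apiserver readiness probe, not something this log itself exercises):

    # Raw reachability/readiness check against the profile's apiserver port (8441):
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz
    # Or test the listener alone; -k skips certificate verification, so this
    # only tells you whether anything is accepting connections on the port:
    curl -sk https://localhost:8441/readyz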
	I1212 00:23:53.175490   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:53.185411   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:53.185474   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:53.209584   54101 cri.go:89] found id: ""
	I1212 00:23:53.209597   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.209616   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:53.209628   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:53.209693   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:53.233686   54101 cri.go:89] found id: ""
	I1212 00:23:53.233700   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.233707   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:53.233712   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:53.233774   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:53.257587   54101 cri.go:89] found id: ""
	I1212 00:23:53.257601   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.257608   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:53.257613   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:53.257670   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:53.285867   54101 cri.go:89] found id: ""
	I1212 00:23:53.285880   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.285887   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:53.285892   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:53.285947   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:53.312516   54101 cri.go:89] found id: ""
	I1212 00:23:53.312530   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.312537   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:53.312541   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:53.312599   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:53.336425   54101 cri.go:89] found id: ""
	I1212 00:23:53.336445   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.336452   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:53.336457   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:53.336514   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:53.360258   54101 cri.go:89] found id: ""
	I1212 00:23:53.360271   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.360279   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:53.360287   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:53.360296   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:53.422643   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:53.422660   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:53.451682   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:53.451698   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:53.508302   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:53.508320   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:53.518839   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:53.518855   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:53.608163   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:53.599819   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.600615   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.602118   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.602666   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.604185   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:53.599819   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.600615   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.602118   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.602666   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.604185   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:56.109087   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:56.119165   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:56.119227   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:56.143243   54101 cri.go:89] found id: ""
	I1212 00:23:56.143256   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.143263   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:56.143268   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:56.143326   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:56.168289   54101 cri.go:89] found id: ""
	I1212 00:23:56.168309   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.168316   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:56.168321   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:56.168379   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:56.192149   54101 cri.go:89] found id: ""
	I1212 00:23:56.192163   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.192172   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:56.192177   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:56.192238   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:56.216868   54101 cri.go:89] found id: ""
	I1212 00:23:56.216880   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.216887   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:56.216892   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:56.216954   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:56.241928   54101 cri.go:89] found id: ""
	I1212 00:23:56.241941   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.241951   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:56.241956   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:56.242011   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:56.265468   54101 cri.go:89] found id: ""
	I1212 00:23:56.265481   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.265488   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:56.265493   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:56.265552   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:56.290530   54101 cri.go:89] found id: ""
	I1212 00:23:56.290544   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.290551   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:56.290559   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:56.290569   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:56.345149   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:56.345167   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:56.355854   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:56.355869   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:56.418379   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:56.410553   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.411250   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.412854   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.413395   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.414621   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:56.410553   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.411250   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.412854   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.413395   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.414621   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:56.418389   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:56.418399   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:56.480524   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:56.480543   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:59.011832   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:59.022048   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:59.022108   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:59.046210   54101 cri.go:89] found id: ""
	I1212 00:23:59.046224   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.046231   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:59.046236   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:59.046299   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:59.071192   54101 cri.go:89] found id: ""
	I1212 00:23:59.071206   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.071213   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:59.071217   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:59.071278   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:59.095678   54101 cri.go:89] found id: ""
	I1212 00:23:59.095692   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.095698   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:59.095703   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:59.095760   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:59.119812   54101 cri.go:89] found id: ""
	I1212 00:23:59.119825   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.119832   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:59.119837   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:59.119897   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:59.143943   54101 cri.go:89] found id: ""
	I1212 00:23:59.143957   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.143964   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:59.143969   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:59.144028   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:59.174483   54101 cri.go:89] found id: ""
	I1212 00:23:59.174506   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.174513   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:59.174519   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:59.174576   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:59.202048   54101 cri.go:89] found id: ""
	I1212 00:23:59.202061   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.202068   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:59.202076   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:59.202087   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:59.257143   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:59.257161   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:59.268235   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:59.268252   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:59.334149   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:59.326488   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.326882   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.328393   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.328789   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.330327   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:59.326488   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.326882   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.328393   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.328789   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.330327   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:59.334159   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:59.334184   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:59.396366   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:59.396383   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
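Note: the pgrep timestamps (00:23:44, :47, :50, :53, :56, :59, 00:24:01, ...) show the loop retrying roughly every three seconds, and it keeps polling until the overall start timeout expires, which is why the parent test above runs for the full ~500 s before failing. A minimal sketch of the same poll-until-deadline shape, with the 3-second interval read off the timestamps and an illustrative deadline (not minikube's actual value):

    # Poll for an apiserver process until a deadline, matching the loop's shape.
    deadline=$((SECONDS + 300))          # illustrative deadline, not minikube's actual timeout
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "${SECONDS}" -ge "${deadline}" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3                            # interval observed between pgrep attempts above
    done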
	I1212 00:24:01.926850   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:01.937253   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:01.937312   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:01.966272   54101 cri.go:89] found id: ""
	I1212 00:24:01.966286   54101 logs.go:282] 0 containers: []
	W1212 00:24:01.966293   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:01.966298   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:01.966359   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:01.991061   54101 cri.go:89] found id: ""
	I1212 00:24:01.991075   54101 logs.go:282] 0 containers: []
	W1212 00:24:01.991082   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:01.991087   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:01.991145   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:02.019646   54101 cri.go:89] found id: ""
	I1212 00:24:02.019661   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.019668   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:02.019673   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:02.019731   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:02.044619   54101 cri.go:89] found id: ""
	I1212 00:24:02.044634   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.044641   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:02.044648   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:02.044704   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:02.069486   54101 cri.go:89] found id: ""
	I1212 00:24:02.069500   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.069508   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:02.069512   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:02.069569   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:02.096887   54101 cri.go:89] found id: ""
	I1212 00:24:02.096901   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.096908   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:02.096913   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:02.096974   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:02.124826   54101 cri.go:89] found id: ""
	I1212 00:24:02.124839   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.124847   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:02.124854   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:02.124864   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:02.152773   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:02.152789   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:02.210656   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:02.210676   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:02.222006   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:02.222022   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:02.293474   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:02.284427   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.285315   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.287050   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.287829   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.289483   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:02.284427   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.285315   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.287050   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.287829   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.289483   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:02.293484   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:02.293499   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:04.860582   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:04.870768   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:04.870829   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:04.896675   54101 cri.go:89] found id: ""
	I1212 00:24:04.896689   54101 logs.go:282] 0 containers: []
	W1212 00:24:04.896696   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:04.896701   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:04.896759   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:04.925636   54101 cri.go:89] found id: ""
	I1212 00:24:04.925651   54101 logs.go:282] 0 containers: []
	W1212 00:24:04.925658   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:04.925664   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:04.925730   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:04.950839   54101 cri.go:89] found id: ""
	I1212 00:24:04.950853   54101 logs.go:282] 0 containers: []
	W1212 00:24:04.950860   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:04.950865   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:04.950922   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:04.976777   54101 cri.go:89] found id: ""
	I1212 00:24:04.976792   54101 logs.go:282] 0 containers: []
	W1212 00:24:04.976799   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:04.976804   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:04.976862   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:05.007523   54101 cri.go:89] found id: ""
	I1212 00:24:05.007538   54101 logs.go:282] 0 containers: []
	W1212 00:24:05.007547   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:05.007552   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:05.007615   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:05.034390   54101 cri.go:89] found id: ""
	I1212 00:24:05.034412   54101 logs.go:282] 0 containers: []
	W1212 00:24:05.034419   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:05.034424   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:05.034492   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:05.060364   54101 cri.go:89] found id: ""
	I1212 00:24:05.060378   54101 logs.go:282] 0 containers: []
	W1212 00:24:05.060385   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:05.060394   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:05.060405   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:05.130824   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:05.122601   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.123172   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.124809   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.125287   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.126908   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:05.122601   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.123172   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.124809   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.125287   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.126908   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:05.130836   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:05.130846   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:05.193088   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:05.193106   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:05.221288   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:05.221305   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:05.280911   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:05.280928   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
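
The entries above are one iteration of minikube's apiserver wait loop: roughly every three seconds it checks for a kube-apiserver process, queries crictl for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), finds none, and gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. A minimal shell sketch of that loop, assuming the ~3 s interval inferred from the timestamps and reusing only commands that appear verbatim in this log:

	# Sketch of the wait loop visible in this log (interval inferred, not authoritative);
	# run inside the minikube node. Exits once a kube-apiserver process appears.
	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    sudo crictl ps -a --quiet --name="$name"   # returns nothing here: no containers were ever created
	  done
	  sudo journalctl -u kubelet -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	  sudo journalctl -u containerd -n 400
	  sleep 3
	done
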
	I1212 00:24:07.791957   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:07.803197   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:07.803258   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:07.849866   54101 cri.go:89] found id: ""
	I1212 00:24:07.849879   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.849885   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:07.849890   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:07.849944   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:07.879098   54101 cri.go:89] found id: ""
	I1212 00:24:07.879112   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.879118   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:07.879123   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:07.879180   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:07.903042   54101 cri.go:89] found id: ""
	I1212 00:24:07.903056   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.903063   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:07.903068   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:07.903124   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:07.926973   54101 cri.go:89] found id: ""
	I1212 00:24:07.926986   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.927024   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:07.927029   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:07.927093   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:07.952849   54101 cri.go:89] found id: ""
	I1212 00:24:07.952863   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.952870   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:07.952875   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:07.952937   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:07.976048   54101 cri.go:89] found id: ""
	I1212 00:24:07.976061   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.976068   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:07.976073   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:07.976127   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:08.005144   54101 cri.go:89] found id: ""
	I1212 00:24:08.005157   54101 logs.go:282] 0 containers: []
	W1212 00:24:08.005165   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:08.005173   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:08.005183   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:08.062459   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:08.062477   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:08.073793   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:08.073821   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:08.140014   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:08.132203   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.132726   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.134246   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.134712   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.136200   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:08.132203   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.132726   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.134246   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.134712   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.136200   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:08.140025   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:08.140035   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:08.202051   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:08.202070   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
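
Each describe-nodes attempt fails identically: kubectl cannot reach https://localhost:8441 because nothing is listening on the apiserver port, which matches crictl reporting zero kube-apiserver containers. A quick, generic check to confirm the port is closed from inside the node (an assumed diagnostic, not part of the test harness; /livez is the standard kube-apiserver health endpoint):

	# Succeeds only if an apiserver is actually serving on 8441; otherwise prints the fallback.
	curl -sk https://localhost:8441/livez || echo "nothing listening on 8441"
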
	I1212 00:24:10.733798   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:10.743998   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:10.744057   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:10.768781   54101 cri.go:89] found id: ""
	I1212 00:24:10.768795   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.768802   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:10.768807   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:10.768871   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:10.811478   54101 cri.go:89] found id: ""
	I1212 00:24:10.811492   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.811499   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:10.811504   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:10.811570   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:10.842339   54101 cri.go:89] found id: ""
	I1212 00:24:10.842358   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.842365   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:10.842370   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:10.842431   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:10.874129   54101 cri.go:89] found id: ""
	I1212 00:24:10.874143   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.874151   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:10.874157   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:10.874217   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:10.898217   54101 cri.go:89] found id: ""
	I1212 00:24:10.898231   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.898244   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:10.898249   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:10.898306   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:10.923360   54101 cri.go:89] found id: ""
	I1212 00:24:10.923374   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.923380   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:10.923385   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:10.923442   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:10.947605   54101 cri.go:89] found id: ""
	I1212 00:24:10.947619   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.947626   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:10.947634   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:10.947645   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:11.006969   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:11.006995   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:11.018264   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:11.018281   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:11.082660   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:11.073705   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.074224   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.075940   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.076685   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.078178   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:11.073705   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.074224   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.075940   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.076685   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.078178   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:11.082671   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:11.082681   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:11.144246   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:11.144263   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:13.671933   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:13.683185   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:13.683253   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:13.708906   54101 cri.go:89] found id: ""
	I1212 00:24:13.708920   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.708927   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:13.708932   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:13.709070   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:13.733465   54101 cri.go:89] found id: ""
	I1212 00:24:13.733479   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.733486   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:13.733491   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:13.733555   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:13.757055   54101 cri.go:89] found id: ""
	I1212 00:24:13.757069   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.757076   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:13.757084   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:13.757142   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:13.781588   54101 cri.go:89] found id: ""
	I1212 00:24:13.781602   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.781609   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:13.781614   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:13.781674   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:13.811312   54101 cri.go:89] found id: ""
	I1212 00:24:13.811325   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.811333   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:13.811337   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:13.811394   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:13.844313   54101 cri.go:89] found id: ""
	I1212 00:24:13.844326   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.844333   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:13.844338   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:13.844421   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:13.868420   54101 cri.go:89] found id: ""
	I1212 00:24:13.868434   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.868441   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:13.868449   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:13.868459   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:13.923519   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:13.923536   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:13.934615   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:13.934631   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:14.000483   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:13.989816   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.990515   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.992025   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.992486   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.995350   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:13.989816   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.990515   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.992025   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.992486   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.995350   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:14.000493   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:14.000505   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:14.063145   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:14.063165   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:16.593154   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:16.603519   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:16.603584   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:16.632576   54101 cri.go:89] found id: ""
	I1212 00:24:16.632589   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.632596   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:16.632603   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:16.632663   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:16.661504   54101 cri.go:89] found id: ""
	I1212 00:24:16.661518   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.661525   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:16.661530   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:16.661587   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:16.686915   54101 cri.go:89] found id: ""
	I1212 00:24:16.686930   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.686937   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:16.686942   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:16.687035   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:16.711579   54101 cri.go:89] found id: ""
	I1212 00:24:16.711594   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.711601   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:16.711606   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:16.711664   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:16.735976   54101 cri.go:89] found id: ""
	I1212 00:24:16.735990   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.735998   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:16.736003   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:16.736058   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:16.760337   54101 cri.go:89] found id: ""
	I1212 00:24:16.760351   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.760359   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:16.760364   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:16.760429   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:16.787594   54101 cri.go:89] found id: ""
	I1212 00:24:16.787608   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.787625   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:16.787634   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:16.787644   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:16.853787   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:16.853805   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:16.865402   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:16.865418   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:16.934251   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:16.925653   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.926416   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.928097   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.928745   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.930355   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:16.925653   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.926416   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.928097   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.928745   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.930355   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:16.934261   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:16.934272   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:16.995335   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:16.995360   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:19.530311   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:19.540648   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:19.540711   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:19.573854   54101 cri.go:89] found id: ""
	I1212 00:24:19.573868   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.573875   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:19.573880   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:19.573938   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:19.598830   54101 cri.go:89] found id: ""
	I1212 00:24:19.598850   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.598857   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:19.598862   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:19.598965   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:19.624335   54101 cri.go:89] found id: ""
	I1212 00:24:19.624349   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.624357   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:19.624364   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:19.624451   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:19.650800   54101 cri.go:89] found id: ""
	I1212 00:24:19.650813   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.650820   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:19.650826   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:19.650887   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:19.676025   54101 cri.go:89] found id: ""
	I1212 00:24:19.676038   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.676046   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:19.676051   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:19.676111   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:19.702971   54101 cri.go:89] found id: ""
	I1212 00:24:19.702984   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.703003   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:19.703008   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:19.703066   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:19.727517   54101 cri.go:89] found id: ""
	I1212 00:24:19.727530   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.727537   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:19.727545   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:19.727558   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:19.784930   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:19.784948   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:19.799325   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:19.799340   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:19.872030   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:19.864278   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.865037   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.866546   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.866841   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.868283   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:19.864278   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.865037   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.866546   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.866841   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.868283   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:19.872041   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:19.872052   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:19.934549   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:19.934568   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:22.466009   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:22.476227   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:22.476288   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:22.501678   54101 cri.go:89] found id: ""
	I1212 00:24:22.501705   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.501712   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:22.501717   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:22.501785   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:22.531238   54101 cri.go:89] found id: ""
	I1212 00:24:22.531251   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.531258   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:22.531263   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:22.531321   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:22.554936   54101 cri.go:89] found id: ""
	I1212 00:24:22.554949   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.554956   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:22.554962   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:22.555055   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:22.582980   54101 cri.go:89] found id: ""
	I1212 00:24:22.583017   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.583025   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:22.583030   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:22.583094   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:22.608038   54101 cri.go:89] found id: ""
	I1212 00:24:22.608051   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.608069   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:22.608074   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:22.608134   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:22.631929   54101 cri.go:89] found id: ""
	I1212 00:24:22.631942   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.631959   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:22.631965   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:22.632035   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:22.660069   54101 cri.go:89] found id: ""
	I1212 00:24:22.660083   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.660090   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:22.660107   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:22.660118   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:22.722675   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:22.714219   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.714970   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.716604   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.716888   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.718358   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:22.714219   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.714970   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.716604   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.716888   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.718358   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:22.722685   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:22.722695   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:22.783718   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:22.783736   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:22.815064   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:22.815082   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:22.876099   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:22.876117   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:25.389270   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:25.399208   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:25.399264   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:25.423023   54101 cri.go:89] found id: ""
	I1212 00:24:25.423036   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.423043   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:25.423048   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:25.423110   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:25.447118   54101 cri.go:89] found id: ""
	I1212 00:24:25.447132   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.447140   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:25.447145   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:25.447203   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:25.471506   54101 cri.go:89] found id: ""
	I1212 00:24:25.471520   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.471527   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:25.471532   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:25.471588   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:25.496289   54101 cri.go:89] found id: ""
	I1212 00:24:25.496302   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.496310   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:25.496315   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:25.496371   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:25.521055   54101 cri.go:89] found id: ""
	I1212 00:24:25.521068   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.521075   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:25.521080   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:25.521136   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:25.545427   54101 cri.go:89] found id: ""
	I1212 00:24:25.545441   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.545448   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:25.545453   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:25.545509   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:25.573059   54101 cri.go:89] found id: ""
	I1212 00:24:25.573073   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.573080   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:25.573088   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:25.573098   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:25.627642   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:25.627661   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:25.638176   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:25.638192   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:25.702262   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:25.692958   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.693521   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.695662   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.696870   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.697262   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:25.692958   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.693521   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.695662   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.696870   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.697262   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:25.702271   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:25.702283   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:25.768032   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:25.768050   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:28.306236   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:28.316297   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:28.316366   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:28.339825   54101 cri.go:89] found id: ""
	I1212 00:24:28.339838   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.339855   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:28.339860   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:28.339930   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:28.364813   54101 cri.go:89] found id: ""
	I1212 00:24:28.364826   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.364832   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:28.364837   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:28.364902   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:28.398903   54101 cri.go:89] found id: ""
	I1212 00:24:28.398917   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.398923   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:28.398928   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:28.398985   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:28.424563   54101 cri.go:89] found id: ""
	I1212 00:24:28.424577   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.424584   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:28.424595   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:28.424652   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:28.448511   54101 cri.go:89] found id: ""
	I1212 00:24:28.448524   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.448531   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:28.448536   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:28.448595   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:28.473282   54101 cri.go:89] found id: ""
	I1212 00:24:28.473295   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.473303   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:28.473308   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:28.473364   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:28.496850   54101 cri.go:89] found id: ""
	I1212 00:24:28.496864   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.496871   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:28.496879   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:28.496889   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:28.563054   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:28.554678   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.555432   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.557227   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.557770   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.559159   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:28.554678   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.555432   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.557227   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.557770   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.559159   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:28.563064   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:28.563076   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:28.625015   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:28.625034   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:28.656873   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:28.656887   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:28.714792   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:28.714811   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
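The block above is minikube's standard diagnostic pass: it asks crictl for each expected control-plane container by name (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and every query returns an empty ID list, meaning no control-plane container was ever created. The same probe can be reproduced by hand against the profile in this report (functional-767012); the crictl invocation below is copied verbatim from the log:

    # Ask the node's CRI for any kube-apiserver container, running or exited.
    # An empty result corresponds to the log's `found id: ""` / `0 containers`.
    minikube -p functional-767012 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver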
	I1212 00:24:31.225710   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:31.235567   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:31.235633   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:31.259473   54101 cri.go:89] found id: ""
	I1212 00:24:31.259487   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.259494   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:31.259499   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:31.259556   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:31.284058   54101 cri.go:89] found id: ""
	I1212 00:24:31.284070   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.284077   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:31.284082   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:31.284138   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:31.306894   54101 cri.go:89] found id: ""
	I1212 00:24:31.306907   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.306914   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:31.306918   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:31.306978   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:31.334534   54101 cri.go:89] found id: ""
	I1212 00:24:31.334547   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.334554   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:31.334559   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:31.334615   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:31.359236   54101 cri.go:89] found id: ""
	I1212 00:24:31.359250   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.359258   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:31.359263   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:31.359321   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:31.383234   54101 cri.go:89] found id: ""
	I1212 00:24:31.383247   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.383254   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:31.383259   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:31.383314   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:31.407612   54101 cri.go:89] found id: ""
	I1212 00:24:31.407624   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.407631   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:31.407638   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:31.407650   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:31.470123   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:31.470142   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:31.497215   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:31.497231   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:31.553428   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:31.553445   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:31.564292   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:31.564307   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:31.630782   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:31.622216   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.622767   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.624650   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.625108   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.626798   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:31.622216   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.622767   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.624650   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.625108   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.626798   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
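Every "describe nodes" attempt fails the same way: kubectl, run from the pinned binary under /var/lib/minikube/binaries/v1.35.0-beta.0 with the node-local kubeconfig, cannot even open a TCP connection to localhost:8441. That is consistent with the empty crictl listings above: there is no apiserver process to accept the connection. Assuming the node container is still up and ships the ss utility (an assumption; substitute netstat otherwise), a hypothetical manual check for a listener on the apiserver port would be:

    # ss is assumed to be present in the node image.
    minikube -p functional-767012 ssh -- sudo ss -ltn | grep 8441 \
      || echo "nothing listening on 8441"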
	I1212 00:24:34.131141   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:34.141238   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:34.141296   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:34.166032   54101 cri.go:89] found id: ""
	I1212 00:24:34.166045   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.166053   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:34.166057   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:34.166117   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:34.192065   54101 cri.go:89] found id: ""
	I1212 00:24:34.192079   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.192086   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:34.192091   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:34.192146   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:34.216626   54101 cri.go:89] found id: ""
	I1212 00:24:34.216640   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.216646   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:34.216652   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:34.216710   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:34.244975   54101 cri.go:89] found id: ""
	I1212 00:24:34.244989   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.244997   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:34.245002   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:34.245058   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:34.269781   54101 cri.go:89] found id: ""
	I1212 00:24:34.269795   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.269802   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:34.269807   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:34.269867   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:34.294651   54101 cri.go:89] found id: ""
	I1212 00:24:34.294664   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.294672   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:34.294677   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:34.294740   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:34.319772   54101 cri.go:89] found id: ""
	I1212 00:24:34.319786   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.319793   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:34.319801   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:34.319811   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:34.385955   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:34.377894   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.378715   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.380217   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.380694   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.382158   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:34.377894   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.378715   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.380217   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.380694   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.382158   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:34.385966   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:34.385976   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:34.451474   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:34.451493   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:34.478755   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:34.478770   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:34.538195   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:34.538217   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
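With the API server unreachable, the collector falls back to host-level sources: `journalctl -u containerd -n 400` and `journalctl -u kubelet -n 400` pull the last 400 lines of each unit's journal. The kubelet journal is typically where the reason the static apiserver pod never started shows up (image pull failures, manifest problems, cgroup issues). To read it interactively rather than through the collector:

    # --no-pager keeps journalctl from blocking on a pager over ssh.
    minikube -p functional-767012 ssh -- sudo journalctl -u kubelet -n 400 --no-pager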
	I1212 00:24:37.049062   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:37.060494   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:37.060558   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:37.096756   54101 cri.go:89] found id: ""
	I1212 00:24:37.096769   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.096776   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:37.096781   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:37.096857   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:37.123426   54101 cri.go:89] found id: ""
	I1212 00:24:37.123441   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.123448   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:37.123453   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:37.123515   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:37.150366   54101 cri.go:89] found id: ""
	I1212 00:24:37.150379   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.150387   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:37.150392   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:37.150455   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:37.176266   54101 cri.go:89] found id: ""
	I1212 00:24:37.176281   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.176288   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:37.176293   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:37.176379   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:37.211184   54101 cri.go:89] found id: ""
	I1212 00:24:37.211198   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.211205   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:37.211210   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:37.211278   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:37.235978   54101 cri.go:89] found id: ""
	I1212 00:24:37.235992   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.235999   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:37.236005   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:37.236064   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:37.261068   54101 cri.go:89] found id: ""
	I1212 00:24:37.261082   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.261089   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:37.261097   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:37.261107   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:37.318643   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:37.318661   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:37.329758   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:37.329780   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:37.396581   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:37.388347   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.388766   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.390448   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.390869   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.392485   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:37.388347   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.388766   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.390448   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.390869   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.392485   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:37.396591   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:37.396602   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:37.463371   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:37.463399   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
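The dmesg invocation is worth decoding once, since it recurs in every pass: `-P` disables the pager, `-H` requests human-readable timestamps, `-L=never` turns colors off for clean capture, and `--level warn,err,crit,alert,emerg` drops everything below warning severity before `tail -n 400` caps the output. The same filter works verbatim on any Linux host with util-linux dmesg:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400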
	I1212 00:24:39.999532   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:40.021164   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:40.021239   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:40.055893   54101 cri.go:89] found id: ""
	I1212 00:24:40.055908   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.055916   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:40.055921   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:40.055984   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:40.085805   54101 cri.go:89] found id: ""
	I1212 00:24:40.085821   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.085831   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:40.085837   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:40.085902   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:40.113784   54101 cri.go:89] found id: ""
	I1212 00:24:40.113797   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.113804   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:40.113809   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:40.113867   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:40.141930   54101 cri.go:89] found id: ""
	I1212 00:24:40.141945   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.141954   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:40.141959   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:40.142018   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:40.168489   54101 cri.go:89] found id: ""
	I1212 00:24:40.168503   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.168510   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:40.168515   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:40.168575   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:40.195479   54101 cri.go:89] found id: ""
	I1212 00:24:40.195494   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.195501   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:40.195506   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:40.195572   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:40.225277   54101 cri.go:89] found id: ""
	I1212 00:24:40.225290   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.225297   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:40.225305   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:40.225315   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:40.288821   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:40.280605   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.281157   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.282725   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.283252   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.284776   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:40.280605   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.281157   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.282725   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.283252   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.284776   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:40.288833   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:40.288842   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:40.351250   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:40.351269   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:40.379379   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:40.379395   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:40.435768   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:40.435785   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
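The "container status" step uses a small shell fallback chain rather than a fixed binary path: `which crictl || echo crictl` resolves crictl's full path when it is on PATH and otherwise lets the shell try the bare name, and if the crictl listing fails entirely, the whole command falls back to `docker ps -a`. That makes the same one-liner usable on docker- and containerd-backed nodes alike:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a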
	I1212 00:24:42.948581   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:42.958923   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:42.958983   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:42.983729   54101 cri.go:89] found id: ""
	I1212 00:24:42.983743   54101 logs.go:282] 0 containers: []
	W1212 00:24:42.983757   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:42.983762   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:42.983823   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:43.015682   54101 cri.go:89] found id: ""
	I1212 00:24:43.015696   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.015703   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:43.015708   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:43.015767   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:43.051631   54101 cri.go:89] found id: ""
	I1212 00:24:43.051644   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.051658   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:43.051662   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:43.051723   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:43.088521   54101 cri.go:89] found id: ""
	I1212 00:24:43.088535   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.088542   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:43.088547   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:43.088606   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:43.120828   54101 cri.go:89] found id: ""
	I1212 00:24:43.120842   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.120848   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:43.120854   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:43.120916   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:43.146768   54101 cri.go:89] found id: ""
	I1212 00:24:43.146782   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.146789   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:43.146794   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:43.146877   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:43.172067   54101 cri.go:89] found id: ""
	I1212 00:24:43.172081   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.172089   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:43.172097   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:43.172107   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:43.183115   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:43.183131   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:43.245564   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:43.237027   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.237641   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.239314   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.239878   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.241570   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:43.237027   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.237641   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.239314   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.239878   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.241570   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:43.245574   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:43.245585   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:43.307071   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:43.307092   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:43.334124   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:43.334141   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
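Between diagnostic passes the outer loop simply polls for an apiserver process: the timestamps on the `pgrep -xnf kube-apiserver.*minikube.*` lines (00:24:28, :31, :34, :37, :40, :43, :45, ...) show a roughly three-second cadence. A hypothetical standalone equivalent of that wait loop, using only the command taken from the log, would look like:

    # -x: match the full command line exactly, -n: newest match, -f: match against argv.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3   # the log shows ~3 s between attempts
    done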
	I1212 00:24:45.892688   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:45.902643   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:45.902701   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:45.927418   54101 cri.go:89] found id: ""
	I1212 00:24:45.927432   54101 logs.go:282] 0 containers: []
	W1212 00:24:45.927439   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:45.927444   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:45.927504   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:45.950969   54101 cri.go:89] found id: ""
	I1212 00:24:45.950982   54101 logs.go:282] 0 containers: []
	W1212 00:24:45.951005   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:45.951011   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:45.951068   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:45.977037   54101 cri.go:89] found id: ""
	I1212 00:24:45.977050   54101 logs.go:282] 0 containers: []
	W1212 00:24:45.977057   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:45.977062   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:45.977127   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:46.003570   54101 cri.go:89] found id: ""
	I1212 00:24:46.003587   54101 logs.go:282] 0 containers: []
	W1212 00:24:46.003594   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:46.003600   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:46.003668   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:46.035920   54101 cri.go:89] found id: ""
	I1212 00:24:46.035934   54101 logs.go:282] 0 containers: []
	W1212 00:24:46.035941   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:46.035946   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:46.036003   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:46.073828   54101 cri.go:89] found id: ""
	I1212 00:24:46.073842   54101 logs.go:282] 0 containers: []
	W1212 00:24:46.073849   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:46.073854   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:46.073911   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:46.106173   54101 cri.go:89] found id: ""
	I1212 00:24:46.106194   54101 logs.go:282] 0 containers: []
	W1212 00:24:46.106218   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:46.106226   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:46.106239   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:46.162624   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:46.162643   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:46.173580   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:46.173602   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:46.238544   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:46.230296   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.230879   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.232549   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.233036   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.234601   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:46.230296   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.230879   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.232549   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.233036   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.234601   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:46.238555   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:46.238566   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:46.301177   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:46.301195   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:48.831063   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:48.843168   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:48.843226   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:48.871581   54101 cri.go:89] found id: ""
	I1212 00:24:48.871598   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.871605   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:48.871610   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:48.871669   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:48.896221   54101 cri.go:89] found id: ""
	I1212 00:24:48.896236   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.896244   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:48.896249   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:48.896307   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:48.920455   54101 cri.go:89] found id: ""
	I1212 00:24:48.920475   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.920483   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:48.920488   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:48.920550   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:48.944730   54101 cri.go:89] found id: ""
	I1212 00:24:48.944743   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.944750   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:48.944755   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:48.944815   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:48.969159   54101 cri.go:89] found id: ""
	I1212 00:24:48.969172   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.969179   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:48.969184   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:48.969238   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:49.001344   54101 cri.go:89] found id: ""
	I1212 00:24:49.001360   54101 logs.go:282] 0 containers: []
	W1212 00:24:49.001368   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:49.001373   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:49.001440   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:49.026664   54101 cri.go:89] found id: ""
	I1212 00:24:49.026688   54101 logs.go:282] 0 containers: []
	W1212 00:24:49.026696   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:49.026704   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:49.026715   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:49.088266   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:49.088284   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:49.099424   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:49.099438   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:49.166422   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:49.157832   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.158583   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.160190   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.160890   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.162624   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:24:49.157832   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.158583   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.160190   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.160890   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.162624   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:24:49.166432   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:49.166445   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:49.227337   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:49.227355   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
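The port kubectl keeps dialing, 8441, is the apiserver port configured for this profile and is recorded in the node-local kubeconfig that every "describe nodes" attempt passes explicitly. If there were any doubt about which endpoint the probes target, the server line can be read straight out of that file (path copied from the log):

    minikube -p functional-767012 ssh -- sudo grep server /var/lib/minikube/kubeconfig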
	I1212 00:24:51.758903   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:51.768725   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:51.768786   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:51.792403   54101 cri.go:89] found id: ""
	I1212 00:24:51.792417   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.792424   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:51.792429   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:51.792497   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:51.819996   54101 cri.go:89] found id: ""
	I1212 00:24:51.820010   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.820016   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:51.820021   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:51.820080   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:51.844706   54101 cri.go:89] found id: ""
	I1212 00:24:51.844719   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.844727   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:51.844732   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:51.844800   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:51.870289   54101 cri.go:89] found id: ""
	I1212 00:24:51.870303   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.870316   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:51.870321   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:51.870378   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:51.894116   54101 cri.go:89] found id: ""
	I1212 00:24:51.894129   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.894137   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:51.894142   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:51.894200   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:51.918453   54101 cri.go:89] found id: ""
	I1212 00:24:51.918467   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.918474   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:51.918480   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:51.918538   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:51.942207   54101 cri.go:89] found id: ""
	I1212 00:24:51.942220   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.942228   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:51.942235   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:51.942245   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:51.970818   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:51.970835   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:52.026675   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:52.026692   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:52.044175   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:52.044191   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:52.123266   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:52.114940   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:52.115962   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:52.117604   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:52.118040   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:52.119539   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:52.123275   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:52.123286   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
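The cycle above (pgrep for kube-apiserver, per-component crictl lookups, then diagnostics gathering) repeats every few seconds for the rest of this test: minikube is polling until an apiserver process appears, and it never does. A minimal Go sketch of that poll-until-deadline pattern; the interval, timeout, and helper name are illustrative assumptions, not minikube's actual constants:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer retries the same probe the log shows
    // (pgrep -xnf kube-apiserver.*minikube.*) until it succeeds or the
    // deadline passes. Interval and timeout are assumed values.
    func waitForAPIServer(interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            probe := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
            if probe.Run() == nil {
                return nil // a kube-apiserver process exists
            }
            time.Sleep(interval) // nothing yet; retry, as the log does every ~3s
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(3*time.Second, 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }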
	I1212 00:24:54.689949   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:54.700000   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:54.700070   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:54.725625   54101 cri.go:89] found id: ""
	I1212 00:24:54.725638   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.725645   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:54.725650   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:54.725716   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:54.748579   54101 cri.go:89] found id: ""
	I1212 00:24:54.748592   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.748600   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:54.748604   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:54.748661   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:54.772796   54101 cri.go:89] found id: ""
	I1212 00:24:54.772809   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.772816   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:54.772821   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:54.772876   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:54.797082   54101 cri.go:89] found id: ""
	I1212 00:24:54.797095   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.797102   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:54.797107   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:54.797168   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:54.821359   54101 cri.go:89] found id: ""
	I1212 00:24:54.821372   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.821379   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:54.821384   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:54.821441   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:54.848911   54101 cri.go:89] found id: ""
	I1212 00:24:54.848924   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.848931   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:54.848936   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:54.848993   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:54.872383   54101 cri.go:89] found id: ""
	I1212 00:24:54.872397   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.872404   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:54.872412   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:54.872422   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:54.927404   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:54.927423   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:54.938083   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:54.938099   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:55.013009   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:54.998953   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:55.000234   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:55.001265   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:55.004572   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:55.007712   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:55.013021   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:55.013032   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:55.084355   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:55.084375   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:57.624991   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:57.635207   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:57.635270   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:57.662282   54101 cri.go:89] found id: ""
	I1212 00:24:57.662296   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.662304   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:57.662309   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:57.662365   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:57.692048   54101 cri.go:89] found id: ""
	I1212 00:24:57.692061   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.692068   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:57.692073   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:57.692128   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:57.717665   54101 cri.go:89] found id: ""
	I1212 00:24:57.717679   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.717686   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:57.717692   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:57.717752   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:57.746206   54101 cri.go:89] found id: ""
	I1212 00:24:57.746219   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.746226   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:57.746233   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:57.746291   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:57.772883   54101 cri.go:89] found id: ""
	I1212 00:24:57.772896   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.772904   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:57.772909   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:57.772969   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:57.796550   54101 cri.go:89] found id: ""
	I1212 00:24:57.796564   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.796571   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:57.796576   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:57.796636   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:57.819457   54101 cri.go:89] found id: ""
	I1212 00:24:57.819470   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.819481   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:57.819489   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:57.819499   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:57.848789   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:57.848804   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:57.903379   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:57.903404   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:57.914134   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:57.914150   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:57.981734   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:57.973800   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:57.974813   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:57.975633   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:57.976681   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:57.977401   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:57.981743   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:57.981764   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:00.548466   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:00.559868   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:00.559941   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:00.588355   54101 cri.go:89] found id: ""
	I1212 00:25:00.588369   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.588377   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:00.588383   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:00.588446   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:00.615059   54101 cri.go:89] found id: ""
	I1212 00:25:00.615073   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.615080   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:00.615085   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:00.615144   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:00.642285   54101 cri.go:89] found id: ""
	I1212 00:25:00.642299   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.642307   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:00.642312   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:00.642370   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:00.670680   54101 cri.go:89] found id: ""
	I1212 00:25:00.670693   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.670701   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:00.670706   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:00.670766   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:00.696244   54101 cri.go:89] found id: ""
	I1212 00:25:00.696258   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.696266   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:00.696271   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:00.696386   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:00.725727   54101 cri.go:89] found id: ""
	I1212 00:25:00.725741   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.725758   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:00.725764   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:00.725844   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:00.754004   54101 cri.go:89] found id: ""
	I1212 00:25:00.754018   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.754025   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:00.754032   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:00.754044   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:00.766092   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:00.766108   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:00.830876   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:00.822487   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.823145   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.824701   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.825291   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.826797   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:25:00.830886   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:00.830899   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:00.893247   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:00.893265   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:00.920729   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:00.920744   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
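Each `listing CRI containers ... found id: "" ... 0 containers` triplet in these cycles is one `crictl ps -a --quiet --name=<component>` call whose empty output is parsed into an empty ID list. A hedged sketch of that lookup, with the parsing inferred from the log lines rather than taken from minikube's cri.go:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs runs crictl and returns the container IDs it
    // prints, one per line; an empty result corresponds to the
    // "0 containers: []" lines in the log.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }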
	I1212 00:25:03.481388   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:03.491775   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:03.491838   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:03.521216   54101 cri.go:89] found id: ""
	I1212 00:25:03.521230   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.521238   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:03.521243   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:03.521304   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:03.549226   54101 cri.go:89] found id: ""
	I1212 00:25:03.549240   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.549247   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:03.549258   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:03.549315   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:03.577069   54101 cri.go:89] found id: ""
	I1212 00:25:03.577083   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.577090   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:03.577097   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:03.577156   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:03.606566   54101 cri.go:89] found id: ""
	I1212 00:25:03.606580   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.606587   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:03.606592   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:03.606652   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:03.631034   54101 cri.go:89] found id: ""
	I1212 00:25:03.631049   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.631057   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:03.631062   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:03.631125   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:03.655850   54101 cri.go:89] found id: ""
	I1212 00:25:03.655864   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.655871   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:03.655876   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:03.655951   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:03.682159   54101 cri.go:89] found id: ""
	I1212 00:25:03.682173   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.682180   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:03.682187   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:03.682200   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:03.692956   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:03.692973   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:03.759732   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:03.751026   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.751692   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.753437   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.754061   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.755694   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:25:03.759743   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:03.759754   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:03.821448   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:03.821467   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:03.854174   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:03.854191   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:06.412785   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:06.423128   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:06.423192   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:06.451062   54101 cri.go:89] found id: ""
	I1212 00:25:06.451075   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.451082   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:06.451087   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:06.451145   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:06.476861   54101 cri.go:89] found id: ""
	I1212 00:25:06.476875   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.476882   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:06.476888   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:06.476956   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:06.502250   54101 cri.go:89] found id: ""
	I1212 00:25:06.502277   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.502284   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:06.502295   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:06.502363   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:06.527789   54101 cri.go:89] found id: ""
	I1212 00:25:06.527803   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.527810   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:06.527816   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:06.527876   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:06.552928   54101 cri.go:89] found id: ""
	I1212 00:25:06.552942   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.552950   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:06.552956   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:06.553015   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:06.580455   54101 cri.go:89] found id: ""
	I1212 00:25:06.580468   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.580475   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:06.580481   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:06.580541   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:06.605618   54101 cri.go:89] found id: ""
	I1212 00:25:06.605632   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.605640   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:06.605656   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:06.605667   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:06.661856   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:06.661873   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:06.673040   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:06.673057   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:06.744531   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:06.737026   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.737431   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.738919   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.739260   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.740703   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:25:06.744541   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:06.744552   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:06.810963   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:06.810982   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:09.340882   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:09.351148   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:09.351207   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:09.376060   54101 cri.go:89] found id: ""
	I1212 00:25:09.376074   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.376081   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:09.376086   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:09.376144   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:09.401509   54101 cri.go:89] found id: ""
	I1212 00:25:09.401524   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.401532   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:09.401537   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:09.401594   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:09.430682   54101 cri.go:89] found id: ""
	I1212 00:25:09.430697   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.430704   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:09.430709   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:09.430779   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:09.455570   54101 cri.go:89] found id: ""
	I1212 00:25:09.455583   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.455590   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:09.455596   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:09.455652   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:09.480221   54101 cri.go:89] found id: ""
	I1212 00:25:09.480234   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.480251   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:09.480257   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:09.480312   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:09.504553   54101 cri.go:89] found id: ""
	I1212 00:25:09.504566   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.504573   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:09.504578   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:09.504634   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:09.529091   54101 cri.go:89] found id: ""
	I1212 00:25:09.529105   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.529111   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:09.529119   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:09.529129   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:09.590147   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:09.590169   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:09.616705   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:09.616720   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:09.674296   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:09.674314   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:09.685008   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:09.685023   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:09.747995   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:09.740039   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.740945   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.742442   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.742752   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.744216   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
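Every failed `describe nodes` block above reduces to the same root cause: nothing is listening on the apiserver port, so each kubectl request to localhost:8441 is refused before it leaves the machine. A quick, self-contained way to confirm that from Go (port 8441 matches the test's --apiserver-port flag; the check itself is illustrative, not part of the test suite):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // Dial the port kubectl is failing against; a "connection refused"
    // here matches the kubectl errors in the log and means no process
    // is bound to 8441.
    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8441")
    }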
	I1212 00:25:12.248240   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:12.258577   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:12.258636   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:12.296410   54101 cri.go:89] found id: ""
	I1212 00:25:12.296425   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.296432   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:12.296438   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:12.296495   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:12.322054   54101 cri.go:89] found id: ""
	I1212 00:25:12.322069   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.322076   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:12.322081   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:12.322137   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:12.354557   54101 cri.go:89] found id: ""
	I1212 00:25:12.354570   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.354577   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:12.354582   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:12.354643   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:12.379214   54101 cri.go:89] found id: ""
	I1212 00:25:12.379228   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.379235   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:12.379240   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:12.379297   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:12.403239   54101 cri.go:89] found id: ""
	I1212 00:25:12.403253   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.403261   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:12.403266   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:12.403325   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:12.429024   54101 cri.go:89] found id: ""
	I1212 00:25:12.429039   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.429052   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:12.429058   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:12.429117   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:12.454240   54101 cri.go:89] found id: ""
	I1212 00:25:12.454253   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.454260   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:12.454268   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:12.454279   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:12.465168   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:12.465185   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:12.530196   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:12.522373   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.522762   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.524330   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.524677   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.526171   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:25:12.530207   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:12.530218   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:12.596659   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:12.596686   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:12.629646   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:12.629666   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:15.188117   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:15.198184   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:15.198246   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:15.222760   54101 cri.go:89] found id: ""
	I1212 00:25:15.222774   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.222781   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:15.222786   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:15.222841   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:15.247134   54101 cri.go:89] found id: ""
	I1212 00:25:15.247149   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.247156   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:15.247161   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:15.247220   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:15.273493   54101 cri.go:89] found id: ""
	I1212 00:25:15.273506   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.273513   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:15.273518   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:15.273575   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:15.325769   54101 cri.go:89] found id: ""
	I1212 00:25:15.325782   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.325790   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:15.325794   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:15.325851   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:15.352564   54101 cri.go:89] found id: ""
	I1212 00:25:15.352578   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.352589   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:15.352594   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:15.352652   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:15.381006   54101 cri.go:89] found id: ""
	I1212 00:25:15.381025   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.381032   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:15.381037   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:15.381094   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:15.404889   54101 cri.go:89] found id: ""
	I1212 00:25:15.404903   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.404910   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:15.404917   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:15.404936   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:15.472619   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:15.464098   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.465350   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.466018   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.467674   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.468107   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:25:15.472631   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:15.472643   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:15.533279   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:15.533297   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:15.563170   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:15.563185   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:15.622483   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:15.622499   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
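The cycle above repeats throughout this failure: minikube polls for a running apiserver with pgrep, probes each expected control-plane component with crictl (empty output means "no container found"), then gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. A minimal standalone Go sketch of the per-component probe follows; it assumes crictl and passwordless sudo on the node, and foundContainer is an illustrative helper, not minikube's own code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// foundContainer mirrors the check in the log: list all containers whose
// name matches and report whether any container ID came back.
func foundContainer(name string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return false, err
	}
	return len(strings.Fields(string(out))) > 0, nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		found, err := foundContainer(c)
		fmt.Printf("%s: found=%v err=%v\n", c, found, err)
	}
}

Every retry in this log is the equivalent probe returning found=false for all seven components, because the control plane never came up.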
	I1212 00:25:18.135301   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:18.145599   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:18.145657   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:18.170223   54101 cri.go:89] found id: ""
	I1212 00:25:18.170237   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.170245   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:18.170250   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:18.170317   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:18.194981   54101 cri.go:89] found id: ""
	I1212 00:25:18.195034   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.195042   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:18.195047   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:18.195107   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:18.219741   54101 cri.go:89] found id: ""
	I1212 00:25:18.219754   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.219762   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:18.219767   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:18.219836   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:18.244023   54101 cri.go:89] found id: ""
	I1212 00:25:18.244036   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.244043   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:18.244048   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:18.244105   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:18.268830   54101 cri.go:89] found id: ""
	I1212 00:25:18.268844   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.268852   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:18.268857   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:18.268920   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:18.308533   54101 cri.go:89] found id: ""
	I1212 00:25:18.308547   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.308553   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:18.308558   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:18.308618   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:18.342407   54101 cri.go:89] found id: ""
	I1212 00:25:18.342420   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.342426   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:18.342434   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:18.342444   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:18.411629   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:18.403777   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.404392   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.405943   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.406371   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.407842   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:18.403777   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.404392   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.405943   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.406371   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.407842   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:18.411640   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:18.411652   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:18.476356   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:18.476375   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:18.508597   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:18.508613   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:18.565071   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:18.565088   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
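Each "describe nodes" attempt fails identically: kubectl's API group discovery dials the apiserver at localhost:8441 and is refused, so no kubectl call can succeed; the five memcache.go lines per attempt are client-go retrying discovery before giving up. The errors reduce to a refused TCP dial, which a small Go probe can reproduce at the network level (an illustrative sketch using the same port as in the log):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Reproduce just the network-level condition behind the kubectl
	// "connection refused" errors: a TCP dial to the apiserver port.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}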
	I1212 00:25:21.075765   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:21.087124   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:21.087190   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:21.116451   54101 cri.go:89] found id: ""
	I1212 00:25:21.116465   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.116472   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:21.116477   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:21.116540   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:21.142594   54101 cri.go:89] found id: ""
	I1212 00:25:21.142607   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.142615   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:21.142620   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:21.142678   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:21.167624   54101 cri.go:89] found id: ""
	I1212 00:25:21.167638   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.167646   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:21.167651   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:21.167709   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:21.195907   54101 cri.go:89] found id: ""
	I1212 00:25:21.195921   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.195927   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:21.195932   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:21.195987   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:21.220794   54101 cri.go:89] found id: ""
	I1212 00:25:21.220808   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.220816   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:21.220821   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:21.220880   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:21.246438   54101 cri.go:89] found id: ""
	I1212 00:25:21.246451   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.246462   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:21.246473   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:21.246531   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:21.271784   54101 cri.go:89] found id: ""
	I1212 00:25:21.271799   54101 logs.go:282] 0 containers: []
	W1212 00:25:21.271806   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:21.271814   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:21.271833   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:21.315787   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:21.315812   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:21.377319   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:21.377338   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:21.388870   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:21.388885   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:21.453883   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:21.444432   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.445344   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.446969   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.447534   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.449241   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:21.444432   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.445344   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.446969   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.447534   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:21.449241   16587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:21.453893   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:21.453904   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:24.019730   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:24.030732   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:24.030792   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:24.057384   54101 cri.go:89] found id: ""
	I1212 00:25:24.057397   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.057404   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:24.057410   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:24.057467   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:24.087868   54101 cri.go:89] found id: ""
	I1212 00:25:24.087883   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.087891   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:24.087896   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:24.087960   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:24.112813   54101 cri.go:89] found id: ""
	I1212 00:25:24.112827   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.112835   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:24.112840   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:24.112900   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:24.141527   54101 cri.go:89] found id: ""
	I1212 00:25:24.141541   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.141548   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:24.141553   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:24.141612   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:24.171422   54101 cri.go:89] found id: ""
	I1212 00:25:24.171436   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.171444   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:24.171449   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:24.171506   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:24.196733   54101 cri.go:89] found id: ""
	I1212 00:25:24.196758   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.196767   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:24.196772   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:24.196840   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:24.221142   54101 cri.go:89] found id: ""
	I1212 00:25:24.221163   54101 logs.go:282] 0 containers: []
	W1212 00:25:24.221170   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:24.221178   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:24.221188   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:24.280043   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:24.280061   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:24.294333   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:24.294347   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:24.376651   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:24.368398   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.368936   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.370665   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.371218   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.372749   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:24.368398   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.368936   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.370665   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.371218   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:24.372749   16682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:24.376660   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:24.376670   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:24.442437   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:24.442455   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:26.972180   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:26.982717   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:26.982778   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:27.016302   54101 cri.go:89] found id: ""
	I1212 00:25:27.016317   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.016324   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:27.016329   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:27.016390   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:27.041562   54101 cri.go:89] found id: ""
	I1212 00:25:27.041576   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.041583   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:27.041588   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:27.041647   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:27.067288   54101 cri.go:89] found id: ""
	I1212 00:25:27.067301   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.067308   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:27.067313   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:27.067370   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:27.093958   54101 cri.go:89] found id: ""
	I1212 00:25:27.093978   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.093985   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:27.093990   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:27.094046   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:27.119290   54101 cri.go:89] found id: ""
	I1212 00:25:27.119303   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.119310   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:27.119321   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:27.119378   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:27.147433   54101 cri.go:89] found id: ""
	I1212 00:25:27.147446   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.147452   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:27.147457   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:27.147513   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:27.172138   54101 cri.go:89] found id: ""
	I1212 00:25:27.172152   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.172159   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:27.172167   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:27.172177   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:27.228777   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:27.228797   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:27.240006   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:27.240021   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:27.317423   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:27.308478   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.309592   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.311317   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.311656   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.313135   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:27.308478   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.309592   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.311317   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.311656   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.313135   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:27.317433   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:27.317444   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:27.386770   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:27.386790   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:29.918004   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:29.928163   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:29.928225   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:29.957041   54101 cri.go:89] found id: ""
	I1212 00:25:29.957055   54101 logs.go:282] 0 containers: []
	W1212 00:25:29.957062   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:29.957067   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:29.957124   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:29.982223   54101 cri.go:89] found id: ""
	I1212 00:25:29.982237   54101 logs.go:282] 0 containers: []
	W1212 00:25:29.982244   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:29.982249   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:29.982306   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:30.021601   54101 cri.go:89] found id: ""
	I1212 00:25:30.021616   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.021625   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:30.021630   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:30.021707   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:30.065430   54101 cri.go:89] found id: ""
	I1212 00:25:30.065447   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.065456   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:30.065462   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:30.065547   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:30.094609   54101 cri.go:89] found id: ""
	I1212 00:25:30.094623   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.094630   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:30.094635   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:30.094695   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:30.122604   54101 cri.go:89] found id: ""
	I1212 00:25:30.122618   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.122626   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:30.122631   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:30.122690   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:30.148645   54101 cri.go:89] found id: ""
	I1212 00:25:30.148659   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.148667   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:30.148675   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:30.148685   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:30.206432   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:30.206452   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:30.218454   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:30.218469   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:30.284319   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:30.274262   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.275194   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.276848   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.277482   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.278689   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:30.274262   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.275194   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.276848   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.277482   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.278689   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:30.284328   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:30.284339   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:30.356346   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:30.356372   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:32.883437   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:32.893868   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:32.893927   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:32.917839   54101 cri.go:89] found id: ""
	I1212 00:25:32.917852   54101 logs.go:282] 0 containers: []
	W1212 00:25:32.917859   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:32.917865   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:32.917931   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:32.942885   54101 cri.go:89] found id: ""
	I1212 00:25:32.942899   54101 logs.go:282] 0 containers: []
	W1212 00:25:32.942906   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:32.942911   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:32.942974   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:32.968519   54101 cri.go:89] found id: ""
	I1212 00:25:32.968532   54101 logs.go:282] 0 containers: []
	W1212 00:25:32.968539   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:32.968544   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:32.968602   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:33.004343   54101 cri.go:89] found id: ""
	I1212 00:25:33.004357   54101 logs.go:282] 0 containers: []
	W1212 00:25:33.004365   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:33.004370   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:33.004440   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:33.033496   54101 cri.go:89] found id: ""
	I1212 00:25:33.033510   54101 logs.go:282] 0 containers: []
	W1212 00:25:33.033524   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:33.033530   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:33.033590   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:33.061868   54101 cri.go:89] found id: ""
	I1212 00:25:33.061890   54101 logs.go:282] 0 containers: []
	W1212 00:25:33.061898   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:33.061903   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:33.061969   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:33.088616   54101 cri.go:89] found id: ""
	I1212 00:25:33.088630   54101 logs.go:282] 0 containers: []
	W1212 00:25:33.088637   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:33.088645   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:33.088655   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:33.144882   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:33.144899   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:33.156391   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:33.156407   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:33.220404   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:33.211436   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.212144   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.214079   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.214925   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.216614   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:33.211436   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.212144   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.214079   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.214925   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.216614   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:33.220413   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:33.220424   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:33.291732   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:33.291751   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:35.829535   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:35.839412   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:35.839479   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:35.864609   54101 cri.go:89] found id: ""
	I1212 00:25:35.864629   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.864639   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:35.864644   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:35.864705   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:35.888220   54101 cri.go:89] found id: ""
	I1212 00:25:35.888234   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.888241   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:35.888245   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:35.888304   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:35.911726   54101 cri.go:89] found id: ""
	I1212 00:25:35.911739   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.911746   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:35.911751   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:35.911812   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:35.937495   54101 cri.go:89] found id: ""
	I1212 00:25:35.937510   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.937517   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:35.937522   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:35.937578   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:35.962276   54101 cri.go:89] found id: ""
	I1212 00:25:35.962290   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.962296   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:35.962301   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:35.962360   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:35.985962   54101 cri.go:89] found id: ""
	I1212 00:25:35.985981   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.985989   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:35.985994   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:35.986056   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:36.012853   54101 cri.go:89] found id: ""
	I1212 00:25:36.012867   54101 logs.go:282] 0 containers: []
	W1212 00:25:36.012875   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:36.012882   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:36.012895   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:36.069296   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:36.069315   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:36.080983   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:36.081000   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:36.149041   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:36.139864   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.140552   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.142366   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.143049   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.144891   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:36.139864   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.140552   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.142366   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.143049   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.144891   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:36.149053   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:36.149064   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:36.210509   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:36.210528   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:38.743061   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:38.752984   54101 kubeadm.go:602] duration metric: took 4m3.726857079s to restartPrimaryControlPlane
	W1212 00:25:38.753047   54101 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 00:25:38.753120   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 00:25:39.158817   54101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:25:39.172695   54101 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:25:39.181725   54101 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:25:39.181785   54101 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:25:39.189823   54101 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:25:39.189833   54101 kubeadm.go:158] found existing configuration files:
	
	I1212 00:25:39.189882   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:25:39.197507   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:25:39.197568   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:25:39.206290   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:25:39.215918   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:25:39.215979   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:25:39.224009   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:25:39.231677   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:25:39.231744   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:25:39.239027   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:25:39.246759   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:25:39.246820   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
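With none of the four files present, each grep exits non-zero and the matching rm -f is a no-op; the same grep-then-remove pairs would also clear genuinely stale kubeconfigs that point at the wrong control-plane endpoint. A standalone Go sketch of that cleanup logic, with the paths and endpoint copied from the log (the structure is illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8441"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is
		// missing; in either case the (possibly stale) file is removed.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			if rmErr := exec.Command("sudo", "rm", "-f", f).Run(); rmErr != nil {
				fmt.Println("rm failed:", f, rmErr)
			}
		}
	}
}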
	I1212 00:25:39.254322   54101 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:25:39.294892   54101 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:25:39.294976   54101 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:25:39.369123   54101 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:25:39.369186   54101 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:25:39.369220   54101 kubeadm.go:319] OS: Linux
	I1212 00:25:39.369264   54101 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:25:39.369311   54101 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:25:39.369356   54101 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:25:39.369403   54101 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:25:39.369450   54101 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:25:39.369496   54101 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:25:39.369541   54101 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:25:39.369587   54101 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:25:39.369632   54101 kubeadm.go:319] CGROUPS_BLKIO: enabled
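These KERNEL_VERSION/OS/CGROUPS_* lines are kubeadm's preflight system verification output; the verification reported failure, which minikube deliberately ignores under the docker driver (see the earlier "ignoring SystemVerification" line). On a cgroup v1 host the same controller availability can be read from /proc/cgroups; the reader below is an illustration under that assumption, not kubeadm's verifier:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// /proc/cgroups lists one controller per line:
	// subsys_name  hierarchy  num_cgroups  enabled
	f, err := os.Open("/proc/cgroups")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "#") {
			continue
		}
		if fields := strings.Fields(line); len(fields) == 4 {
			fmt.Printf("CGROUPS_%s: enabled=%s\n", strings.ToUpper(fields[0]), fields[3])
		}
	}
}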
	I1212 00:25:39.438649   54101 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:25:39.438759   54101 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:25:39.438849   54101 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:25:39.447406   54101 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:25:39.452683   54101 out.go:252]   - Generating certificates and keys ...
	I1212 00:25:39.452767   54101 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:25:39.452831   54101 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:25:39.452906   54101 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:25:39.452965   54101 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:25:39.453033   54101 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:25:39.453085   54101 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 00:25:39.453148   54101 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:25:39.453208   54101 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:25:39.453281   54101 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:25:39.453353   54101 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:25:39.453389   54101 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 00:25:39.453445   54101 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:25:39.710711   54101 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:25:40.209307   54101 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:25:40.334299   54101 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:25:40.657582   54101 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:25:40.893171   54101 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:25:40.893926   54101 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:25:40.896489   54101 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:25:40.899767   54101 out.go:252]   - Booting up control plane ...
	I1212 00:25:40.899871   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:25:40.899953   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:25:40.900236   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:25:40.921621   54101 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:25:40.921722   54101 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:25:40.928629   54101 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:25:40.928898   54101 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:25:40.928939   54101 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:25:41.061713   54101 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:25:41.061825   54101 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:29:41.062316   54101 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001026811s
	I1212 00:29:41.062606   54101 kubeadm.go:319] 
	I1212 00:29:41.062683   54101 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:29:41.062716   54101 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:29:41.062821   54101 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:29:41.062826   54101 kubeadm.go:319] 
	I1212 00:29:41.062929   54101 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:29:41.062960   54101 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:29:41.063008   54101 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:29:41.063012   54101 kubeadm.go:319] 
	I1212 00:29:41.067208   54101 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:29:41.067622   54101 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:29:41.067731   54101 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:29:41.067994   54101 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 00:29:41.067998   54101 kubeadm.go:319] 
	I1212 00:29:41.068065   54101 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 00:29:41.068164   54101 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001026811s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 00:29:41.068252   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 00:29:41.482759   54101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:29:41.496287   54101 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:29:41.496351   54101 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:29:41.504378   54101 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:29:41.504387   54101 kubeadm.go:158] found existing configuration files:
	
	I1212 00:29:41.504442   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:29:41.512585   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:29:41.512640   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:29:41.520530   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:29:41.528262   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:29:41.528318   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:29:41.536111   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:29:41.543998   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:29:41.544056   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:29:41.551686   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:29:41.559774   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:29:41.559831   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:29:41.567115   54101 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:29:41.604105   54101 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:29:41.604156   54101 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:29:41.681810   54101 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:29:41.681880   54101 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:29:41.681919   54101 kubeadm.go:319] OS: Linux
	I1212 00:29:41.681969   54101 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:29:41.682023   54101 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:29:41.682069   54101 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:29:41.682134   54101 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:29:41.682195   54101 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:29:41.682256   54101 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:29:41.682310   54101 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:29:41.682358   54101 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:29:41.682410   54101 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:29:41.751743   54101 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:29:41.751870   54101 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:29:41.751978   54101 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:29:41.757399   54101 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:29:41.762811   54101 out.go:252]   - Generating certificates and keys ...
	I1212 00:29:41.762902   54101 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:29:41.762969   54101 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:29:41.763059   54101 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:29:41.763119   54101 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:29:41.763187   54101 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:29:41.763239   54101 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 00:29:41.763301   54101 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:29:41.763361   54101 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:29:41.763434   54101 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:29:41.763505   54101 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:29:41.763542   54101 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 00:29:41.763596   54101 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:29:42.025181   54101 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:29:42.229266   54101 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:29:42.409579   54101 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:29:42.479383   54101 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:29:43.146782   54101 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:29:43.147428   54101 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:29:43.150122   54101 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:29:43.153470   54101 out.go:252]   - Booting up control plane ...
	I1212 00:29:43.153571   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:29:43.153647   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:29:43.153712   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:29:43.174954   54101 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:29:43.175084   54101 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:29:43.182722   54101 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:29:43.183334   54101 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:29:43.183511   54101 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:29:43.327482   54101 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:29:43.327594   54101 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:33:43.326577   54101 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001134626s
	I1212 00:33:43.326601   54101 kubeadm.go:319] 
	I1212 00:33:43.326657   54101 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:33:43.326688   54101 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:33:43.326791   54101 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:33:43.326796   54101 kubeadm.go:319] 
	I1212 00:33:43.326899   54101 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:33:43.326930   54101 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:33:43.326959   54101 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:33:43.326962   54101 kubeadm.go:319] 
	I1212 00:33:43.331146   54101 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:33:43.331567   54101 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:33:43.331673   54101 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:33:43.331909   54101 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 00:33:43.331913   54101 kubeadm.go:319] 
	I1212 00:33:43.331980   54101 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 00:33:43.332070   54101 kubeadm.go:403] duration metric: took 12m8.353678295s to StartCluster
	I1212 00:33:43.332098   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:33:43.332159   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:33:43.356905   54101 cri.go:89] found id: ""
	I1212 00:33:43.356919   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.356925   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:33:43.356930   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:33:43.356985   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:33:43.381448   54101 cri.go:89] found id: ""
	I1212 00:33:43.381464   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.381471   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:33:43.381477   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:33:43.381541   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:33:43.409467   54101 cri.go:89] found id: ""
	I1212 00:33:43.409480   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.409487   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:33:43.409492   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:33:43.409550   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:33:43.434352   54101 cri.go:89] found id: ""
	I1212 00:33:43.434367   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.434375   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:33:43.434381   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:33:43.434439   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:33:43.458566   54101 cri.go:89] found id: ""
	I1212 00:33:43.458581   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.458588   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:33:43.458593   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:33:43.458661   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:33:43.482646   54101 cri.go:89] found id: ""
	I1212 00:33:43.482660   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.482667   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:33:43.482672   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:33:43.482728   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:33:43.507433   54101 cri.go:89] found id: ""
	I1212 00:33:43.507445   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.507452   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:33:43.507461   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:33:43.507472   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:33:43.575281   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:33:43.567196   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.568177   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.569762   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.570292   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.571460   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:33:43.567196   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.568177   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.569762   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.570292   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.571460   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:33:43.575296   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:33:43.575305   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:33:43.637567   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:33:43.637585   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:33:43.665505   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:33:43.665520   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:33:43.723897   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:33:43.723913   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 00:33:43.734646   54101 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001134626s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 00:33:43.734686   54101 out.go:285] * 
	W1212 00:33:43.734800   54101 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001134626s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 00:33:43.734860   54101 out.go:285] * 
	W1212 00:33:43.737311   54101 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:33:43.743292   54101 out.go:203] 
	W1212 00:33:43.746156   54101 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001134626s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 00:33:43.746395   54101 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 00:33:43.746473   54101 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 00:33:43.751052   54101 out.go:203] 
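
The failure mode above is consistent: with Kubernetes v1.35 the kubelet refuses to validate its configuration on a cgroup v1 host unless v1 support is explicitly re-enabled, which matches the preflight warning about 'FailCgroupV1'. A minimal sketch of a manual opt-in, assuming the v1beta1 KubeletConfiguration spells the field failCgroupV1 and that the patch directory gets wired into kubeadm via --patches or the InitConfiguration patches field (this run already applies a "kubeletconfiguration" patch):

    # Sketch only: re-enable cgroup v1 for kubelet v1.35+ on a cgroup v1 host.
    # Assumed: the v1beta1 field name failCgroupV1 (the warning says 'FailCgroupV1').
    mkdir -p /tmp/kubelet-patches
    cat > /tmp/kubelet-patches/kubeletconfiguration+strategic.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF

Alternatively, the Suggestion line above proposes retrying at the minikube level with --extra-config=kubelet.cgroup-driver=systemd.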
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272455867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272520269Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272625542Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272714248Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272776665Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272836596Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272893384Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272958435Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.273027211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.273124763Z" level=info msg="Connect containerd service"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.273469529Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.274122072Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287381427Z" level=info msg="Start subscribing containerd event"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287554622Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287703153Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287625047Z" level=info msg="Start recovering state"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327211013Z" level=info msg="Start event monitor"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327399462Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327470929Z" level=info msg="Start streaming server"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327536341Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327597642Z" level=info msg="runtime interface starting up..."
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327652682Z" level=info msg="starting plugins..."
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327716215Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327919745Z" level=info msg="containerd successfully booted in 0.080422s"
	Dec 12 00:21:33 functional-767012 systemd[1]: Started containerd.service - containerd container runtime.
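
The only error in the containerd journal is the missing CNI config, which is normal before a pod network add-on writes one; it can be confirmed directly. A sketch, assuming the docker driver names the node container after the profile:

    # Sketch: list CNI configs inside the node container (name assumed from profile).
    docker exec functional-767012 ls -la /etc/cni/net.d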
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:35:47.093156   23050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:47.093996   23050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:47.096645   23050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:47.098287   23050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:47.098573   23050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
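
Both kubectl attempts fail at TCP connect rather than at auth, so the apiserver never came up. A sketch for distinguishing "no listener" from a network problem, checking the port inside the node:

    # Sketch: is anything listening on the apiserver port inside the node?
    minikube ssh -p functional-767012 -- sudo ss -ltnp '( sport = :8441 )'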
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 00:35:47 up  1:18,  0 user,  load average: 0.49, 0.26, 0.35
	Linux functional-767012 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:35:44 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:44 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 12 00:35:44 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:44 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:44 functional-767012 kubelet[22898]: E1212 00:35:44.853484   22898 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:44 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:44 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:45 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 12 00:35:45 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:45 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:45 functional-767012 kubelet[22932]: E1212 00:35:45.536160   22932 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:45 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:45 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:46 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 12 00:35:46 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:46 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:46 functional-767012 kubelet[22968]: E1212 00:35:46.210180   22968 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:46 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:46 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:47 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 12 00:35:47 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:47 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:47 functional-767012 kubelet[23055]: E1212 00:35:47.105892   23055 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:47 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:47 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
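The journal excerpt above is the root cause for this whole family of failures: the v1.35.0-beta.0 kubelet fails its own configuration validation on a cgroup v1 host and exits immediately, systemd keeps restarting it (restart counter 482 through 485 in this window alone), and so the apiserver behind port 8441 never comes up. Which cgroup hierarchy a host is running can be confirmed with a generic one-liner (a manual check, not part of the test suite):

	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" indicates the legacy cgroup v1 layout
	stat -fc %T /sys/fs/cgroup/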
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012: exit status 2 (334.13491ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-767012" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.09s)
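StatusCmd fails for the same underlying reason: the profile's APIServer field resolves to Stopped and the command exits with status 2, which the post-mortem helper tolerates ("may be ok") but the test itself does not. Re-running the same check by hand (the command the helper ran above) reproduces it while the kubelet is crash-looping:

	out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012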

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-767012 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-767012 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (55.729104ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-767012 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
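Every kubectl call in the post-mortem below fails the same way: nothing is listening on 192.168.49.2:8441 because the kubelet never stays up (see the cgroup v1 restart loop above), so the static-pod apiserver it would launch is never started. A manual probe of the endpoint (illustrative, not part of the test run) would confirm this:

	# expect "connection refused" while the kubelet is crash-looping
	curl -sk --max-time 2 https://192.168.49.2:8441/healthz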
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-767012 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-767012 describe po hello-node-connect: exit status 1 (65.04837ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-767012 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-767012 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-767012 logs -l app=hello-node-connect: exit status 1 (77.3905ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-767012 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-767012 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-767012 describe svc hello-node-connect: exit status 1 (62.532419ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-767012 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-767012
helpers_test.go:244: (dbg) docker inspect functional-767012:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	        "Created": "2025-12-12T00:06:52.261765556Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42951,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:06:52.317917194Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hostname",
	        "HostsPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hosts",
	        "LogPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e-json.log",
	        "Name": "/functional-767012",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-767012:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-767012",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	                "LowerDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-767012",
	                "Source": "/var/lib/docker/volumes/functional-767012/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-767012",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-767012",
	                "name.minikube.sigs.k8s.io": "functional-767012",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e781257da3adf1d3284ab2a6de0168c3db7957f25a7e53d0015250294193762d",
	            "SandboxKey": "/var/run/docker/netns/e781257da3ad",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-767012": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:4d:78:ba:7d:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "83467cc4cb13818b98ec0d7cb5fc0064ea6eb2c8db4256a8a81330921aa2d9a4",
	                    "EndpointID": "b787b732d8d748776ceeb6e65fab51cc1e79758446bc85ac20043b35593fab12",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-767012",
	                        "6585a82fe5e6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
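Note that docker inspect shows the node container itself is healthy: State.Status is "running" and 8441/tcp is published to 127.0.0.1:32791, so the failure is inside the guest rather than in Docker's port mapping. The mapped host port can be read back with the same Go-template pattern minikube itself uses for port 22 further down in these logs (an illustrative variant for the apiserver port):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-767012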
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012: exit status 2 (308.190678ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-767012 cache reload                                                                                                                               │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ ssh     │ functional-767012 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │ 12 Dec 25 00:21 UTC │
	│ kubectl │ functional-767012 kubectl -- --context functional-767012 get pods                                                                                            │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ start   │ -p functional-767012 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:21 UTC │                     │
	│ cp      │ functional-767012 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ config  │ functional-767012 config unset cpus                                                                                                                          │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ config  │ functional-767012 config get cpus                                                                                                                            │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ config  │ functional-767012 config set cpus 2                                                                                                                          │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ config  │ functional-767012 config get cpus                                                                                                                            │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ config  │ functional-767012 config unset cpus                                                                                                                          │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ ssh     │ functional-767012 ssh -n functional-767012 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ config  │ functional-767012 config get cpus                                                                                                                            │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ ssh     │ functional-767012 ssh echo hello                                                                                                                             │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ cp      │ functional-767012 cp functional-767012:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1022785893/001/cp-test.txt │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ ssh     │ functional-767012 ssh cat /etc/hostname                                                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ ssh     │ functional-767012 ssh -n functional-767012 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ tunnel  │ functional-767012 tunnel --alsologtostderr                                                                                                                   │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ tunnel  │ functional-767012 tunnel --alsologtostderr                                                                                                                   │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ cp      │ functional-767012 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ tunnel  │ functional-767012 tunnel --alsologtostderr                                                                                                                   │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │                     │
	│ ssh     │ functional-767012 ssh -n functional-767012 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:33 UTC │ 12 Dec 25 00:33 UTC │
	│ addons  │ functional-767012 addons list                                                                                                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ addons  │ functional-767012 addons list -o json                                                                                                                        │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:21:30
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:21:30.554245   54101 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:21:30.554345   54101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:21:30.554348   54101 out.go:374] Setting ErrFile to fd 2...
	I1212 00:21:30.554353   54101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:21:30.554677   54101 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:21:30.555164   54101 out.go:368] Setting JSON to false
	I1212 00:21:30.555965   54101 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3837,"bootTime":1765495054,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 00:21:30.556051   54101 start.go:143] virtualization:  
	I1212 00:21:30.559689   54101 out.go:179] * [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:21:30.562867   54101 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:21:30.562960   54101 notify.go:221] Checking for updates...
	I1212 00:21:30.566618   54101 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:21:30.569772   54101 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:21:30.572750   54101 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 00:21:30.576169   54101 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:21:30.579060   54101 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:21:30.582404   54101 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:21:30.582492   54101 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:21:30.621591   54101 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:21:30.621756   54101 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:21:30.683145   54101 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 00:21:30.674181767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:21:30.683240   54101 docker.go:319] overlay module found
	I1212 00:21:30.688118   54101 out.go:179] * Using the docker driver based on existing profile
	I1212 00:21:30.690961   54101 start.go:309] selected driver: docker
	I1212 00:21:30.690971   54101 start.go:927] validating driver "docker" against &{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:21:30.691125   54101 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:21:30.691237   54101 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:21:30.747846   54101 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 00:21:30.73816398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:21:30.748230   54101 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:21:30.748252   54101 cni.go:84] Creating CNI manager for ""
	I1212 00:21:30.748298   54101 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:21:30.748340   54101 start.go:353] cluster config:
	{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:21:30.751463   54101 out.go:179] * Starting "functional-767012" primary control-plane node in "functional-767012" cluster
	I1212 00:21:30.754231   54101 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 00:21:30.757160   54101 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 00:21:30.760119   54101 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:21:30.760160   54101 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 00:21:30.760168   54101 cache.go:65] Caching tarball of preloaded images
	I1212 00:21:30.760193   54101 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 00:21:30.760258   54101 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 00:21:30.760267   54101 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 00:21:30.760383   54101 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/config.json ...
	I1212 00:21:30.778906   54101 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 00:21:30.778917   54101 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 00:21:30.778938   54101 cache.go:243] Successfully downloaded all kic artifacts
	I1212 00:21:30.778968   54101 start.go:360] acquireMachinesLock for functional-767012: {Name:mk41cf89e93a3830367886ebbef2bb8f6e99e3f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:21:30.779070   54101 start.go:364] duration metric: took 80.115µs to acquireMachinesLock for "functional-767012"
	I1212 00:21:30.779088   54101 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:21:30.779093   54101 fix.go:54] fixHost starting: 
	I1212 00:21:30.779346   54101 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
	I1212 00:21:30.795901   54101 fix.go:112] recreateIfNeeded on functional-767012: state=Running err=<nil>
	W1212 00:21:30.795920   54101 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:21:30.799043   54101 out.go:252] * Updating the running docker "functional-767012" container ...
	I1212 00:21:30.799064   54101 machine.go:94] provisionDockerMachine start ...
	I1212 00:21:30.799139   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:30.816214   54101 main.go:143] libmachine: Using SSH client type: native
	I1212 00:21:30.816539   54101 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:21:30.816545   54101 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 00:21:30.966929   54101 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
	I1212 00:21:30.966943   54101 ubuntu.go:182] provisioning hostname "functional-767012"
	I1212 00:21:30.967026   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:30.983921   54101 main.go:143] libmachine: Using SSH client type: native
	I1212 00:21:30.984212   54101 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:21:30.984220   54101 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-767012 && echo "functional-767012" | sudo tee /etc/hostname
	I1212 00:21:31.148238   54101 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
	
	I1212 00:21:31.148339   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:31.167090   54101 main.go:143] libmachine: Using SSH client type: native
	I1212 00:21:31.167393   54101 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1212 00:21:31.167407   54101 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-767012' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-767012/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-767012' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:21:31.315620   54101 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:21:31.315644   54101 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 00:21:31.315665   54101 ubuntu.go:190] setting up certificates
	I1212 00:21:31.315680   54101 provision.go:84] configureAuth start
	I1212 00:21:31.315738   54101 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:21:31.348126   54101 provision.go:143] copyHostCerts
	I1212 00:21:31.348184   54101 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 00:21:31.348191   54101 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 00:21:31.348265   54101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 00:21:31.348353   54101 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 00:21:31.348357   54101 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 00:21:31.348380   54101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 00:21:31.348433   54101 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 00:21:31.348436   54101 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 00:21:31.348457   54101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 00:21:31.348500   54101 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.functional-767012 san=[127.0.0.1 192.168.49.2 functional-767012 localhost minikube]
	I1212 00:21:31.571131   54101 provision.go:177] copyRemoteCerts
	I1212 00:21:31.571185   54101 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:21:31.571226   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:31.588332   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:31.690410   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 00:21:31.707240   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 00:21:31.724075   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:21:31.740524   54101 provision.go:87] duration metric: took 424.823605ms to configureAuth
	I1212 00:21:31.740541   54101 ubuntu.go:206] setting minikube options for container-runtime
	I1212 00:21:31.740761   54101 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:21:31.740771   54101 machine.go:97] duration metric: took 941.698571ms to provisionDockerMachine
	I1212 00:21:31.740778   54101 start.go:293] postStartSetup for "functional-767012" (driver="docker")
	I1212 00:21:31.740788   54101 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:21:31.740838   54101 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:21:31.740873   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:31.758388   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:31.866987   54101 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:21:31.870573   54101 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 00:21:31.870591   54101 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 00:21:31.870603   54101 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 00:21:31.870659   54101 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 00:21:31.870732   54101 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 00:21:31.870809   54101 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts -> hosts in /etc/test/nested/copy/4290
	I1212 00:21:31.870853   54101 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4290
	I1212 00:21:31.878221   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:21:31.898601   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts --> /etc/test/nested/copy/4290/hosts (40 bytes)
	I1212 00:21:31.917863   54101 start.go:296] duration metric: took 177.070825ms for postStartSetup
	I1212 00:21:31.917948   54101 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:21:31.917994   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:31.934865   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:32.037797   54101 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 00:21:32.044535   54101 fix.go:56] duration metric: took 1.265435742s for fixHost
	I1212 00:21:32.044551   54101 start.go:83] releasing machines lock for "functional-767012", held for 1.265473363s
	I1212 00:21:32.044634   54101 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
	I1212 00:21:32.063486   54101 ssh_runner.go:195] Run: cat /version.json
	I1212 00:21:32.063525   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:32.063754   54101 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:21:32.063796   54101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
	I1212 00:21:32.082463   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:32.110313   54101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
	I1212 00:21:32.198490   54101 ssh_runner.go:195] Run: systemctl --version
	I1212 00:21:32.295700   54101 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:21:32.300162   54101 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:21:32.300220   54101 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:21:32.308110   54101 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:21:32.308123   54101 start.go:496] detecting cgroup driver to use...
	I1212 00:21:32.308152   54101 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 00:21:32.308196   54101 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 00:21:32.324857   54101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 00:21:32.337980   54101 docker.go:218] disabling cri-docker service (if available) ...
	I1212 00:21:32.338034   54101 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:21:32.353838   54101 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:21:32.367832   54101 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:21:32.501329   54101 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:21:32.628856   54101 docker.go:234] disabling docker service ...
	I1212 00:21:32.628933   54101 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:21:32.643664   54101 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:21:32.657070   54101 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:21:32.773509   54101 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:21:32.920829   54101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:21:32.933710   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:21:32.947319   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 00:21:32.956944   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 00:21:32.966825   54101 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 00:21:32.966891   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 00:21:32.976378   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:21:32.985341   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 00:21:32.995459   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 00:21:33.011573   54101 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:21:33.020559   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 00:21:33.029747   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 00:21:33.038731   54101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 00:21:33.048050   54101 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:21:33.056172   54101 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:21:33.063953   54101 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:21:33.190754   54101 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 00:21:33.330744   54101 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 00:21:33.330802   54101 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 00:21:33.334307   54101 start.go:564] Will wait 60s for crictl version
	I1212 00:21:33.334373   54101 ssh_runner.go:195] Run: which crictl
	I1212 00:21:33.337855   54101 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 00:21:33.361388   54101 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 00:21:33.361444   54101 ssh_runner.go:195] Run: containerd --version
	I1212 00:21:33.383087   54101 ssh_runner.go:195] Run: containerd --version
	I1212 00:21:33.409485   54101 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 00:21:33.412580   54101 cli_runner.go:164] Run: docker network inspect functional-767012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 00:21:33.429552   54101 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 00:21:33.436766   54101 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 00:21:33.439631   54101 kubeadm.go:884] updating cluster {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:21:33.439814   54101 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 00:21:33.439917   54101 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:21:33.465266   54101 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:21:33.465277   54101 containerd.go:534] Images already preloaded, skipping extraction
	I1212 00:21:33.465345   54101 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:21:33.495685   54101 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 00:21:33.495696   54101 cache_images.go:86] Images are preloaded, skipping loading
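[editor's note] The preload check above runs `sudo crictl images --output json` twice and concludes the cached image tarball was already extracted, so transfer is skipped. A sketch of that check; the JSON shape ({"images":[{"repoTags":[...]}]}) matches crictl's documented output as I understand it, and an empty list would mean the preload was missing:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// preloadedImageCount counts the images crictl reports; zero would
// indicate the preloaded tarball was never extracted.
func preloadedImageCount() (int, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return 0, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return 0, err
	}
	return len(list.Images), nil
}

func main() {
	fmt.Println(preloadedImageCount())
}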
	I1212 00:21:33.495703   54101 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1212 00:21:33.495800   54101 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-767012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:21:33.495863   54101 ssh_runner.go:195] Run: sudo crictl info
	I1212 00:21:33.520655   54101 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 00:21:33.520679   54101 cni.go:84] Creating CNI manager for ""
	I1212 00:21:33.520688   54101 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:21:33.520701   54101 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 00:21:33.520721   54101 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-767012 NodeName:functional-767012 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:21:33.520840   54101 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-767012"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
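[editor's note] The rendered file above is a four-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that all the later `kubeadm init phase ... --config` calls consume. A quick way to sanity-check such a file without touching the cluster is `kubeadm config validate`; a sketch, assuming that subcommand is available in this kubeadm release (it is in recent versions) and using the staging paths from the log:

package main

import (
	"fmt"
	"os/exec"
)

// validateKubeadmConfig shells out to `kubeadm config validate`, which
// checks a multi-document config file like the one printed above.
func validateKubeadmConfig(kubeadmPath, cfg string) error {
	out, err := exec.Command("sudo", kubeadmPath, "config", "validate", "--config", cfg).CombinedOutput()
	if err != nil {
		return fmt.Errorf("validate failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(validateKubeadmConfig(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm",
		"/var/tmp/minikube/kubeadm.yaml"))
}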
	I1212 00:21:33.520909   54101 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 00:21:33.528771   54101 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 00:21:33.528832   54101 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:21:33.537845   54101 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 00:21:33.552578   54101 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 00:21:33.567275   54101 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1212 00:21:33.581608   54101 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 00:21:33.586017   54101 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:21:33.720787   54101 ssh_runner.go:195] Run: sudo systemctl start kubelet
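[editor's note] The three "scp memory" lines above write the kubelet systemd drop-in (10-kubeadm.conf), the kubelet.service unit, and the new kubeadm.yaml from in-memory assets, then daemon-reload and start kubelet. A sketch of the drop-in install step; the ExecStart line is the one printed at kubeadm.go:947 earlier, and the exact byte-for-byte contents of minikube's real template may differ:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// installKubeletDropIn writes the 10-kubeadm.conf drop-in, then reloads
// systemd and starts kubelet. Must run as root to write under /etc.
func installKubeletDropIn() error {
	unit := `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-767012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
`
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
		return err
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unit), 0644); err != nil {
		return err
	}
	for _, c := range [][]string{{"systemctl", "daemon-reload"}, {"systemctl", "start", "kubelet"}} {
		if err := exec.Command("sudo", c...).Run(); err != nil {
			return fmt.Errorf("%v: %v", c, err)
		}
	}
	return nil
}

func main() { fmt.Println(installKubeletDropIn()) }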
	I1212 00:21:34.285938   54101 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012 for IP: 192.168.49.2
	I1212 00:21:34.285949   54101 certs.go:195] generating shared ca certs ...
	I1212 00:21:34.285964   54101 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:21:34.286114   54101 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 00:21:34.286160   54101 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 00:21:34.286167   54101 certs.go:257] generating profile certs ...
	I1212 00:21:34.286262   54101 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key
	I1212 00:21:34.286326   54101 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key.fcbff5a4
	I1212 00:21:34.286371   54101 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key
	I1212 00:21:34.286484   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 00:21:34.286514   54101 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 00:21:34.286521   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 00:21:34.286547   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 00:21:34.286569   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:21:34.286590   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 00:21:34.286633   54101 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 00:21:34.287348   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:21:34.308553   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 00:21:34.331894   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:21:34.355464   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:21:34.374443   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:21:34.393434   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:21:34.411599   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:21:34.429619   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:21:34.447321   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 00:21:34.464997   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:21:34.482627   54101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 00:21:34.500926   54101 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:21:34.513622   54101 ssh_runner.go:195] Run: openssl version
	I1212 00:21:34.519764   54101 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 00:21:34.527069   54101 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 00:21:34.534472   54101 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 00:21:34.538121   54101 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 00:21:34.538179   54101 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 00:21:34.579437   54101 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 00:21:34.586891   54101 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 00:21:34.594262   54101 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 00:21:34.601868   54101 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 00:21:34.605501   54101 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 00:21:34.605557   54101 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 00:21:34.646393   54101 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 00:21:34.653807   54101 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:34.661225   54101 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 00:21:34.668768   54101 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:34.672511   54101 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:34.672567   54101 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:21:34.713655   54101 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
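[editor's note] The ln/openssl/test triples above follow the standard OpenSSL CA-directory convention: each trusted PEM is linked under /etc/ssl/certs, and a symlink named <subject-hash>.0 (the hash comes from `openssl x509 -hash -noout`) lets OpenSSL's directory lookup find it. The log only verifies the hash link with `test -L`; the sketch below also creates it, which is what setup code for this convention typically does:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a PEM and symlinks
// /etc/ssl/certs/<hash>.0 to it, per the convention shown in the log.
func linkCACert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	return exec.Command("sudo", "ln", "-fs", pem, "/etc/ssl/certs/"+hash+".0").Run()
}

func main() {
	fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem"))
}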
	I1212 00:21:34.721031   54101 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:21:34.724786   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:21:34.765815   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:21:34.806690   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:21:34.847558   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:21:34.888576   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:21:34.933434   54101 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
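[editor's note] The six `-checkend 86400` probes above confirm every control-plane certificate remains valid for at least another 24 hours: openssl exits 0 if the cert will not expire within the given number of seconds, non-zero otherwise. A sketch of the same probe over a list of cert paths; the loop structure is mine, the command is the log's:

package main

import (
	"fmt"
	"os/exec"
)

// certsValidFor24h runs `openssl x509 -checkend 86400` per cert:
// exit 0 means still valid a day from now; non-zero means it expires
// within a day or could not be read.
func certsValidFor24h(paths []string) map[string]bool {
	res := make(map[string]bool)
	for _, p := range paths {
		err := exec.Command("sudo", "openssl", "x509", "-noout", "-in", p, "-checkend", "86400").Run()
		res[p] = err == nil
	}
	return res
}

func main() {
	fmt.Println(certsValidFor24h([]string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	}))
}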
	I1212 00:21:34.978399   54101 kubeadm.go:401] StartCluster: {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:21:34.978479   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 00:21:34.978543   54101 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:21:35.017576   54101 cri.go:89] found id: ""
	I1212 00:21:35.017638   54101 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:21:35.026096   54101 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 00:21:35.026118   54101 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 00:21:35.026171   54101 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:21:35.034785   54101 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:21:35.035314   54101 kubeconfig.go:125] found "functional-767012" server: "https://192.168.49.2:8441"
	I1212 00:21:35.036573   54101 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:21:35.046414   54101 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 00:07:00.613095536 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 00:21:33.576611675 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
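[editor's note] The drift check above rests on diff's exit code: `diff -u old new` exits 0 when the files are identical and 1 when they differ, and minikube treats the latter as "config drift" and reconfigures from the new file. Here the only drift is the admission-plugins value, matching the --extra-config the test passed. A sketch of that exit-code-based check:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted mirrors the diff-based drift check in the log:
// exit 0 = no drift, exit 1 = files differ, anything else = error.
func configDrifted(current, proposed string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", current, proposed).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // out holds the unified diff
	}
	return false, "", err
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
	fmt.Print(diff)
}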
	I1212 00:21:35.046427   54101 kubeadm.go:1161] stopping kube-system containers ...
	I1212 00:21:35.046437   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1212 00:21:35.046492   54101 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:21:35.082797   54101 cri.go:89] found id: ""
	I1212 00:21:35.082857   54101 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 00:21:35.102877   54101 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:21:35.111403   54101 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 12 00:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 12 00:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 12 00:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 12 00:11 /etc/kubernetes/scheduler.conf
	
	I1212 00:21:35.111465   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:21:35.120302   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:21:35.128075   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:21:35.128131   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:21:35.135780   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:21:35.143743   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:21:35.143796   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:21:35.151555   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:21:35.159766   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:21:35.159823   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
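[editor's note] Each grep/rm pair above probes a kubeconfig under /etc/kubernetes for the expected control-plane URL; grep exits 1 on no match, and minikube deletes the stale file so the next kubeadm phase regenerates it (admin.conf matched, so it was kept). A sketch of that prune step, with the endpoint and file list taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

// pruneStaleKubeconfigs removes conf files that do not reference the
// expected endpoint. grep's exit status 1 means "no match", which is
// the stale case; other errors are propagated.
func pruneStaleKubeconfigs(endpoint string, files []string) error {
	for _, f := range files {
		err := exec.Command("sudo", "grep", endpoint, f).Run()
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
				return err
			}
		} else if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	fmt.Println(pruneStaleKubeconfigs("https://control-plane.minikube.internal:8441",
		[]string{"/etc/kubernetes/kubelet.conf", "/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf"}))
}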
	I1212 00:21:35.167617   54101 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:21:35.175675   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:21:35.223997   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:21:36.520500   54101 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.296478898s)
	I1212 00:21:36.520559   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:21:36.729554   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:21:36.788511   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
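[editor's note] Instead of a full `kubeadm init`, the restart path above replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same --config, regenerating only what the earlier cleanup removed. A sketch of that phase loop, using the same `sudo /bin/bash -c "env PATH=..."` shape as the log so the staged binaries win over the default PATH:

package main

import (
	"fmt"
	"os/exec"
)

// runInitPhases replays the kubeadm phase sequence from the log.
func runInitPhases(binDir, cfg string) error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		script := fmt.Sprintf(`env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, p, cfg)
		if out, err := exec.Command("sudo", "/bin/bash", "-c", script).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(runInitPhases("/var/lib/minikube/binaries/v1.35.0-beta.0", "/var/tmp/minikube/kubeadm.yaml"))
}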
	I1212 00:21:36.835897   54101 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:21:36.835964   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:37.336817   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:37.836795   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:38.336842   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:38.836903   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:39.336145   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:39.836069   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:40.336948   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:40.837012   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:41.336101   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:41.836925   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:42.336725   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:42.836125   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:43.336921   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:43.836180   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:44.336837   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:44.836956   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:45.336777   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:45.836993   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:46.336836   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:46.836176   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:47.336095   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:47.836055   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:48.336741   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:48.836121   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:49.336917   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:49.836413   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:50.336092   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:50.836150   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:51.337033   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:51.836957   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:52.336084   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:52.836739   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:53.336118   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:53.836933   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:54.336879   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:54.836792   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:55.336817   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:55.836920   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:56.336115   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:56.836712   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:57.336349   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:57.836961   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:58.336641   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:58.836512   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:59.336849   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:21:59.836072   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:00.336133   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:00.836802   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:01.336983   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:01.836131   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:02.336195   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:02.836806   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:03.336993   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:03.837131   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:04.336098   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:04.836118   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:05.336315   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:05.837043   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:06.336091   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:06.836161   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:07.336123   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:07.836157   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:08.336214   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:08.836176   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:09.336152   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:09.836160   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:10.336024   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:10.836954   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:11.337041   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:11.836824   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:12.336075   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:12.836181   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:13.336397   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:13.836099   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:14.336156   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:14.836195   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:15.336313   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:15.836956   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:16.336943   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:16.836999   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:17.336149   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:17.836085   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:18.336339   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:18.836154   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:19.336945   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:19.836761   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:20.336721   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:20.837012   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:21.336764   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:21.836154   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:22.336203   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:22.836095   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:23.336255   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:23.836950   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:24.336879   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:24.836852   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:25.336786   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:25.836079   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:26.336767   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:26.836193   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:27.336157   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:27.836115   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:28.336182   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:28.836772   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:29.336188   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:29.836047   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:30.336792   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:30.836649   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:31.337030   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:31.836180   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:32.336198   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:32.837057   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:33.336991   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:33.836801   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:34.336920   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:34.836119   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:35.337050   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:35.836716   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:36.336423   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
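[editor's note] The long run of probes above is the api_server.go wait loop: `pgrep -xnf kube-apiserver.*minikube.*` every 500ms (pgrep -xnf matches the newest process whose full command line matches the pattern), with an overall deadline. It never succeeds here, so after the budget is spent the code falls through to the log-gathering pass below. A sketch of the loop; the pattern and interval come from the log, the deadline handling is mine:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep every 500ms until kube-apiserver
// appears or the deadline passes. pgrep exits 0 only on a match.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process never appeared within %s", timeout)
}

func main() {
	fmt.Println(waitForAPIServerProcess(60 * time.Second))
}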
	I1212 00:22:36.836018   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:36.836096   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:36.862427   54101 cri.go:89] found id: ""
	I1212 00:22:36.862441   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.862448   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:36.862453   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:36.862517   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:36.892149   54101 cri.go:89] found id: ""
	I1212 00:22:36.892163   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.892169   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:36.892175   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:36.892234   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:36.916655   54101 cri.go:89] found id: ""
	I1212 00:22:36.916670   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.916677   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:36.916681   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:36.916753   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:36.945533   54101 cri.go:89] found id: ""
	I1212 00:22:36.945546   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.945554   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:36.945559   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:36.945616   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:36.970456   54101 cri.go:89] found id: ""
	I1212 00:22:36.970469   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.970477   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:36.970482   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:36.970556   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:36.997550   54101 cri.go:89] found id: ""
	I1212 00:22:36.997568   54101 logs.go:282] 0 containers: []
	W1212 00:22:36.997577   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:36.997582   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:36.997656   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:37.043296   54101 cri.go:89] found id: ""
	I1212 00:22:37.043319   54101 logs.go:282] 0 containers: []
	W1212 00:22:37.043326   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:37.043334   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:37.043344   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:37.115314   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:37.115335   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:37.126489   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:37.126505   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:37.191880   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:37.183564   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.183995   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.185555   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.185892   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.187528   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:37.183564   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.183995   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.185555   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.185892   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:37.187528   10734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:37.191890   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:37.191900   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:37.253331   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:37.253349   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
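[editor's note] The container-status probe above is a shell fallback chain: use whichever crictl `which` finds (or the bare name if none), and if that whole command fails, fall back to `docker ps -a`. The `||` operators only compose correctly inside a single shell, hence the bash -c wrapper. A sketch of the same invocation:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus reproduces the fallback chain from the log, inside
// one bash -c so the `||` operators behave as in the original command.
func containerStatus() (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
	return string(out), err
}

func main() {
	s, err := containerStatus()
	fmt.Println(err)
	fmt.Print(s)
}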
	I1212 00:22:39.783593   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:39.793972   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:39.794055   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:39.822155   54101 cri.go:89] found id: ""
	I1212 00:22:39.822169   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.822176   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:39.822181   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:39.822250   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:39.847125   54101 cri.go:89] found id: ""
	I1212 00:22:39.847138   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.847145   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:39.847150   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:39.847210   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:39.872050   54101 cri.go:89] found id: ""
	I1212 00:22:39.872064   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.872072   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:39.872077   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:39.872143   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:39.896579   54101 cri.go:89] found id: ""
	I1212 00:22:39.896592   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.896599   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:39.896606   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:39.896664   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:39.921505   54101 cri.go:89] found id: ""
	I1212 00:22:39.921520   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.921537   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:39.921543   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:39.921602   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:39.949647   54101 cri.go:89] found id: ""
	I1212 00:22:39.949660   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.949667   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:39.949672   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:39.949739   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:39.972863   54101 cri.go:89] found id: ""
	I1212 00:22:39.972877   54101 logs.go:282] 0 containers: []
	W1212 00:22:39.972886   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:39.972894   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:39.972904   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:39.983379   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:39.983394   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:40.083583   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:40.071923   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.073365   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.075724   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.076148   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.078746   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:40.071923   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.073365   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.075724   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.076148   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:40.078746   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:40.083593   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:40.083604   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:40.153645   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:40.153664   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:22:40.181452   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:40.181471   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:42.742128   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:42.752298   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:42.752357   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:42.777204   54101 cri.go:89] found id: ""
	I1212 00:22:42.777218   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.777225   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:42.777236   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:42.777295   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:42.801649   54101 cri.go:89] found id: ""
	I1212 00:22:42.801663   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.801670   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:42.801675   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:42.801731   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:42.826035   54101 cri.go:89] found id: ""
	I1212 00:22:42.826048   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.826055   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:42.826059   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:42.826131   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:42.853290   54101 cri.go:89] found id: ""
	I1212 00:22:42.853303   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.853310   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:42.853316   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:42.853372   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:42.880012   54101 cri.go:89] found id: ""
	I1212 00:22:42.880025   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.880033   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:42.880037   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:42.880097   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:42.909253   54101 cri.go:89] found id: ""
	I1212 00:22:42.909267   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.909274   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:42.909279   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:42.909335   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:42.936731   54101 cri.go:89] found id: ""
	I1212 00:22:42.936745   54101 logs.go:282] 0 containers: []
	W1212 00:22:42.936756   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:42.936764   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:42.936782   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:42.991768   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:42.991787   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:43.005267   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:43.005283   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:43.089221   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:43.080335   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.081099   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.082720   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.083301   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.084856   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:43.080335   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.081099   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.082720   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.083301   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:43.084856   10942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:43.089233   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:43.089244   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:43.153170   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:43.153191   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
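
The cycle above is minikube's control-plane container probe: for every expected component it runs sudo crictl ps -a --quiet --name=<component> over SSH (ssh_runner.go) and treats empty output as "No container was found matching". A minimal Go sketch of that check — runCommand and foundContainers are illustrative stand-ins, not minikube's actual API:

    // probe.go — a minimal sketch of the per-component container probe
    // seen in the log above, assuming a runCommand helper that executes
    // a command on the node and returns its stdout.
    package main

    import (
        "fmt"
        "strings"
    )

    // runCommand is a stand-in for minikube's SSH runner; here it is a
    // stub so the sketch compiles.
    func runCommand(cmd string) (string, error) { return "", nil }

    // foundContainers returns the container IDs crictl reports for name;
    // empty output means no container matched.
    func foundContainers(name string) ([]string, error) {
        out, err := runCommand("sudo crictl ps -a --quiet --name=" + name)
        if err != nil {
            return nil, err
        }
        return strings.Fields(out), nil // one ID per line
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
            ids, err := foundContainers(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", c)
            }
        }
    }

In this run every probe returns an empty ID list, which is why each round falls through to the log-gathering steps.
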
	I1212 00:22:45.684515   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:45.696038   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:45.696106   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:45.721408   54101 cri.go:89] found id: ""
	I1212 00:22:45.721422   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.721439   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:45.721446   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:45.721518   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:45.746760   54101 cri.go:89] found id: ""
	I1212 00:22:45.746774   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.746781   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:45.746794   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:45.746852   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:45.784086   54101 cri.go:89] found id: ""
	I1212 00:22:45.784100   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.784107   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:45.784113   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:45.784196   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:45.809513   54101 cri.go:89] found id: ""
	I1212 00:22:45.809527   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.809534   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:45.809547   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:45.809603   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:45.833922   54101 cri.go:89] found id: ""
	I1212 00:22:45.833935   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.833943   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:45.833957   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:45.834020   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:45.858716   54101 cri.go:89] found id: ""
	I1212 00:22:45.858738   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.858745   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:45.858751   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:45.858819   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:45.886125   54101 cri.go:89] found id: ""
	I1212 00:22:45.886140   54101 logs.go:282] 0 containers: []
	W1212 00:22:45.886161   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:45.886170   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:45.886181   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:22:45.913706   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:45.913723   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:45.972155   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:45.972173   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:45.982756   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:45.982771   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:46.057549   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:46.048888   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.049652   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.050838   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.051562   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.053189   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:46.048888   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.049652   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.050838   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.051562   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:46.053189   11058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:46.057568   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:46.057589   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:48.631952   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:48.641871   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:48.641945   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:48.667026   54101 cri.go:89] found id: ""
	I1212 00:22:48.667040   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.667047   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:48.667052   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:48.667111   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:48.694393   54101 cri.go:89] found id: ""
	I1212 00:22:48.694407   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.694414   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:48.694419   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:48.694479   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:48.723393   54101 cri.go:89] found id: ""
	I1212 00:22:48.723406   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.723413   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:48.723418   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:48.723480   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:48.749414   54101 cri.go:89] found id: ""
	I1212 00:22:48.749427   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.749434   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:48.749440   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:48.749500   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:48.773494   54101 cri.go:89] found id: ""
	I1212 00:22:48.773508   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.773514   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:48.773520   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:48.773584   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:48.798476   54101 cri.go:89] found id: ""
	I1212 00:22:48.798490   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.798497   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:48.798502   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:48.798570   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:48.823097   54101 cri.go:89] found id: ""
	I1212 00:22:48.823112   54101 logs.go:282] 0 containers: []
	W1212 00:22:48.823119   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:48.823127   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:48.823136   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:48.884369   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:48.884390   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:22:48.918017   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:48.918032   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:48.974636   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:48.974656   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:48.985524   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:48.985540   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:49.075379   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:49.063866   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.064550   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.067284   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.067979   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.070881   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:49.063866   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.064550   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.067284   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.067979   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:49.070881   11166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
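
Every "failed describe nodes" block above fails the same way: kubectl dials https://localhost:8441 and gets ECONNREFUSED because no kube-apiserver is listening on the node. A minimal, hypothetical Go probe for that condition (the port 8441 is taken from the log; this program is not part of minikube):

    // reach.go — a sketch of checking whether anything listens on the
    // apiserver port, mirroring the "connection refused" errors above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            // e.g. dial tcp [::1]:8441: connect: connection refused
            fmt.Println("apiserver port unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on :8441")
    }
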
	I1212 00:22:51.575612   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:51.585822   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:51.585880   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:51.611290   54101 cri.go:89] found id: ""
	I1212 00:22:51.611304   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.611311   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:51.611317   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:51.611376   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:51.638852   54101 cri.go:89] found id: ""
	I1212 00:22:51.638868   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.638875   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:51.638882   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:51.638941   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:51.663831   54101 cri.go:89] found id: ""
	I1212 00:22:51.663845   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.663852   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:51.663857   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:51.663914   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:51.689264   54101 cri.go:89] found id: ""
	I1212 00:22:51.689278   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.689286   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:51.689291   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:51.689350   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:51.714774   54101 cri.go:89] found id: ""
	I1212 00:22:51.714788   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.714795   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:51.714800   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:51.714889   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:51.739800   54101 cri.go:89] found id: ""
	I1212 00:22:51.739814   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.739822   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:51.739827   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:51.739885   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:51.767107   54101 cri.go:89] found id: ""
	I1212 00:22:51.767134   54101 logs.go:282] 0 containers: []
	W1212 00:22:51.767142   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:51.767150   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:51.767160   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:51.821534   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:51.821552   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:51.832147   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:51.832161   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:51.897869   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:51.890100   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.890663   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.892157   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.892582   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.894067   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:51.890100   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.890663   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.892157   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.892582   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:51.894067   11256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:51.897889   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:51.897899   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:51.958502   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:51.958519   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:22:54.487348   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:54.497592   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:54.497655   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:54.524765   54101 cri.go:89] found id: ""
	I1212 00:22:54.524779   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.524787   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:54.524800   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:54.524860   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:54.549685   54101 cri.go:89] found id: ""
	I1212 00:22:54.549699   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.549706   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:54.549710   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:54.549766   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:54.573523   54101 cri.go:89] found id: ""
	I1212 00:22:54.573537   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.573544   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:54.573549   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:54.573607   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:54.602326   54101 cri.go:89] found id: ""
	I1212 00:22:54.602342   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.602349   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:54.602354   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:54.602411   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:54.626746   54101 cri.go:89] found id: ""
	I1212 00:22:54.626777   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.626784   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:54.626792   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:54.626860   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:54.652678   54101 cri.go:89] found id: ""
	I1212 00:22:54.652693   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.652715   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:54.652720   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:54.652789   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:54.677588   54101 cri.go:89] found id: ""
	I1212 00:22:54.677602   54101 logs.go:282] 0 containers: []
	W1212 00:22:54.677609   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:54.677617   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:54.677627   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:54.733727   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:54.733750   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:54.744434   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:54.744450   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:54.810290   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:54.802232   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.802635   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.804258   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.804924   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.806440   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:54.802232   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.802635   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.804258   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.804924   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:54.806440   11361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:54.810301   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:54.810311   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:54.869777   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:54.869794   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
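
The "Gathering logs for ..." steps run fixed shell pipelines over SSH: journalctl for the kubelet and containerd units, and a filtered dmesg. A rough local sketch of the same gathering step, assuming direct shell access instead of minikube's SSH runner (the command strings are copied verbatim from the log):

    // gather.go — a sketch of the log-gathering step, run locally.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := map[string]string{
            "kubelet":    "sudo journalctl -u kubelet -n 400",
            "containerd": "sudo journalctl -u containerd -n 400",
            "dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        for name, cmd := range sources {
            // bash -c is needed because the dmesg pipeline contains a pipe.
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("gathering %s failed: %v\n", name, err)
                continue
            }
            fmt.Printf("=== %s ===\n%s", name, out)
        }
    }
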
	I1212 00:22:57.396960   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:22:57.406761   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:22:57.406819   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:22:57.431202   54101 cri.go:89] found id: ""
	I1212 00:22:57.431216   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.431223   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:22:57.431228   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:22:57.431285   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:22:57.456103   54101 cri.go:89] found id: ""
	I1212 00:22:57.456116   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.456123   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:22:57.456129   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:22:57.456185   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:22:57.482677   54101 cri.go:89] found id: ""
	I1212 00:22:57.482690   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.482697   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:22:57.482703   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:22:57.482776   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:22:57.507899   54101 cri.go:89] found id: ""
	I1212 00:22:57.507912   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.507919   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:22:57.507925   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:22:57.507986   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:22:57.536079   54101 cri.go:89] found id: ""
	I1212 00:22:57.536093   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.536101   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:22:57.536106   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:22:57.536167   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:22:57.564822   54101 cri.go:89] found id: ""
	I1212 00:22:57.564836   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.564843   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:22:57.564857   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:22:57.564923   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:22:57.589921   54101 cri.go:89] found id: ""
	I1212 00:22:57.589935   54101 logs.go:282] 0 containers: []
	W1212 00:22:57.589943   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:22:57.589951   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:22:57.589961   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:22:57.648534   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:22:57.648552   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:22:57.659464   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:22:57.659481   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:22:57.727477   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:22:57.718925   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.719812   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.721551   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.722035   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.723542   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:22:57.718925   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.719812   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.721551   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.722035   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:22:57.723542   11464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:22:57.727497   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:22:57.727508   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:22:57.791545   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:22:57.791567   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:00.319474   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:00.337512   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:00.337596   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:00.386008   54101 cri.go:89] found id: ""
	I1212 00:23:00.386034   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.386042   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:00.386048   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:00.386118   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:00.435933   54101 cri.go:89] found id: ""
	I1212 00:23:00.435948   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.435961   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:00.435966   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:00.436033   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:00.464332   54101 cri.go:89] found id: ""
	I1212 00:23:00.464347   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.464354   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:00.464360   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:00.464438   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:00.492272   54101 cri.go:89] found id: ""
	I1212 00:23:00.492288   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.492296   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:00.492308   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:00.492399   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:00.523157   54101 cri.go:89] found id: ""
	I1212 00:23:00.523172   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.523180   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:00.523185   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:00.523251   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:00.551205   54101 cri.go:89] found id: ""
	I1212 00:23:00.551219   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.551227   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:00.551232   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:00.551303   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:00.581595   54101 cri.go:89] found id: ""
	I1212 00:23:00.581609   54101 logs.go:282] 0 containers: []
	W1212 00:23:00.581616   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:00.581624   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:00.581637   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:00.638838   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:00.638857   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:00.650126   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:00.650141   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:00.717921   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:00.707574   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.709178   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.709927   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.711724   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.712419   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:00.707574   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.709178   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.709927   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.711724   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:00.712419   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:00.717933   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:00.717947   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:00.780105   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:00.780123   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
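
The timestamps on the pgrep lines (00:22:45, 00:22:48, 00:22:51, 00:22:54, ...) show the whole probe repeating on roughly a three-second cadence until the apiserver appears or the start gives up. A sketch of such a poll loop, assuming pgrep semantics as in the log (pgrep exits non-zero when nothing matches); the one-minute timeout is illustrative, not minikube's actual value:

    // wait.go — a sketch of the ~3s polling loop visible above.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors "sudo pgrep -xnf kube-apiserver.*minikube.*";
    // a nil error means pgrep found a matching process.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                return nil
            }
            time.Sleep(3 * time.Second) // matches the cadence in the log
        }
        return errors.New("timed out waiting for kube-apiserver")
    }

    func main() {
        if err := waitForAPIServer(time.Minute); err != nil {
            fmt.Println(err)
        }
    }
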
	I1212 00:23:03.311322   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:03.323283   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:03.323344   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:03.361266   54101 cri.go:89] found id: ""
	I1212 00:23:03.361281   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.361288   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:03.361293   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:03.361353   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:03.386333   54101 cri.go:89] found id: ""
	I1212 00:23:03.386347   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.386353   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:03.386363   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:03.386421   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:03.413227   54101 cri.go:89] found id: ""
	I1212 00:23:03.413241   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.413248   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:03.413253   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:03.413310   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:03.437970   54101 cri.go:89] found id: ""
	I1212 00:23:03.437991   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.437999   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:03.438004   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:03.438060   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:03.466477   54101 cri.go:89] found id: ""
	I1212 00:23:03.466491   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.466499   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:03.466504   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:03.466561   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:03.491808   54101 cri.go:89] found id: ""
	I1212 00:23:03.491821   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.491828   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:03.491834   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:03.491890   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:03.517149   54101 cri.go:89] found id: ""
	I1212 00:23:03.517163   54101 logs.go:282] 0 containers: []
	W1212 00:23:03.517170   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:03.517177   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:03.517187   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:03.572746   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:03.572773   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:03.584001   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:03.584018   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:03.656247   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:03.647626   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.648470   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.650161   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.650723   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.652396   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:23:03.647626   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.648470   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.650161   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.650723   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:03.652396   11671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:23:03.656257   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:03.656268   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:03.722945   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:03.722971   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:06.251078   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:06.261552   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:06.261613   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:06.289582   54101 cri.go:89] found id: ""
	I1212 00:23:06.289597   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.289605   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:06.289610   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:06.289673   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:06.317842   54101 cri.go:89] found id: ""
	I1212 00:23:06.317855   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.317863   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:06.317868   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:06.317926   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:06.352672   54101 cri.go:89] found id: ""
	I1212 00:23:06.352685   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.352692   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:06.352697   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:06.352752   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:06.382465   54101 cri.go:89] found id: ""
	I1212 00:23:06.382479   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.382486   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:06.382491   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:06.382549   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:06.409293   54101 cri.go:89] found id: ""
	I1212 00:23:06.409307   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.409325   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:06.409351   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:06.409419   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:06.437827   54101 cri.go:89] found id: ""
	I1212 00:23:06.437842   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.437850   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:06.437855   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:06.437916   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:06.461631   54101 cri.go:89] found id: ""
	I1212 00:23:06.461645   54101 logs.go:282] 0 containers: []
	W1212 00:23:06.461652   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:06.461660   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:06.461672   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:06.524818   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:06.524837   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:06.555647   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:06.555663   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:06.613018   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:06.613037   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:06.623988   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:06.624004   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:06.689835   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:06.681072   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.681903   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.683626   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.684195   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:06.685841   11791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
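	(The probe sequence above repeats identically for each control-plane component. A minimal sketch of reproducing it by hand, assuming an SSH shell into the node under test; the component names and crictl flags are taken from the Run: lines in this log:
	
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    sudo crictl ps -a --quiet --name="$c"    # prints matching container IDs; empty output means no container for $c
	  done
	
	Empty output for every component is what drives the repeated "No container was found matching ..." warnings.)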
	I1212 00:23:09.190077   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:09.199951   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:09.200011   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:09.224598   54101 cri.go:89] found id: ""
	I1212 00:23:09.224612   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.224619   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:09.224624   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:09.224680   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:09.249246   54101 cri.go:89] found id: ""
	I1212 00:23:09.249259   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.249266   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:09.249270   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:09.249326   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:09.276466   54101 cri.go:89] found id: ""
	I1212 00:23:09.276481   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.276488   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:09.276493   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:09.276569   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:09.305292   54101 cri.go:89] found id: ""
	I1212 00:23:09.305306   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.305320   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:09.305325   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:09.305385   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:09.340249   54101 cri.go:89] found id: ""
	I1212 00:23:09.340263   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.340269   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:09.340274   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:09.340335   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:09.371473   54101 cri.go:89] found id: ""
	I1212 00:23:09.371487   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.371494   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:09.371499   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:09.371560   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:09.396595   54101 cri.go:89] found id: ""
	I1212 00:23:09.396611   54101 logs.go:282] 0 containers: []
	W1212 00:23:09.396618   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:09.396626   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:09.396639   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:09.455271   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:09.455288   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:09.465948   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:09.465963   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:09.533532   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:09.524698   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.525522   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.527378   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.527995   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:09.529577   11884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:09.533544   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:09.533554   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:09.595751   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:09.595769   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
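	(Every "describe nodes" attempt fails the same way: nothing is listening on the apiserver port, so each client request to localhost:8441 is refused. That precondition can be checked directly, assuming shell access to the node; the pgrep line mirrors the log, while the curl probe is an added illustration, not part of the original run:
	
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
	  curl -sk https://localhost:8441/livez || echo "apiserver not reachable on 8441"
	)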
	I1212 00:23:12.124276   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:12.134222   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:12.134281   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:12.158363   54101 cri.go:89] found id: ""
	I1212 00:23:12.158377   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.158384   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:12.158390   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:12.158446   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:12.181913   54101 cri.go:89] found id: ""
	I1212 00:23:12.181930   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.181936   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:12.181941   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:12.181997   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:12.206035   54101 cri.go:89] found id: ""
	I1212 00:23:12.206048   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.206055   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:12.206060   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:12.206119   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:12.234593   54101 cri.go:89] found id: ""
	I1212 00:23:12.234606   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.234614   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:12.234618   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:12.234675   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:12.258839   54101 cri.go:89] found id: ""
	I1212 00:23:12.258853   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.258867   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:12.258873   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:12.258931   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:12.295188   54101 cri.go:89] found id: ""
	I1212 00:23:12.295202   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.295219   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:12.295225   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:12.295295   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:12.331819   54101 cri.go:89] found id: ""
	I1212 00:23:12.331833   54101 logs.go:282] 0 containers: []
	W1212 00:23:12.331851   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:12.331859   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:12.331869   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:12.392019   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:12.392036   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:12.402367   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:12.402383   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:12.463715   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:12.455582   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:12.455962   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:12.457659   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:12.458359   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:12.459974   11990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:12.463724   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:12.463745   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:12.528182   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:12.528200   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:15.057258   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:15.068358   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:15.068421   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:15.094774   54101 cri.go:89] found id: ""
	I1212 00:23:15.094787   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.094804   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:15.094812   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:15.094882   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:15.120167   54101 cri.go:89] found id: ""
	I1212 00:23:15.120180   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.120188   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:15.120193   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:15.120249   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:15.150855   54101 cri.go:89] found id: ""
	I1212 00:23:15.150868   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.150886   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:15.150891   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:15.150958   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:15.179684   54101 cri.go:89] found id: ""
	I1212 00:23:15.179697   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.179704   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:15.179709   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:15.179784   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:15.204315   54101 cri.go:89] found id: ""
	I1212 00:23:15.204338   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.204345   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:15.204350   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:15.204425   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:15.229074   54101 cri.go:89] found id: ""
	I1212 00:23:15.229088   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.229095   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:15.229103   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:15.229168   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:15.253510   54101 cri.go:89] found id: ""
	I1212 00:23:15.253532   54101 logs.go:282] 0 containers: []
	W1212 00:23:15.253540   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:15.253548   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:15.253559   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:15.264299   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:15.264317   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:15.346071   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:15.332347   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:15.334627   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:15.335427   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:15.337189   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:15.337763   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:15.346082   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:15.346092   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:15.414287   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:15.414306   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:15.440115   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:15.440130   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:17.999409   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:18.010537   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:18.010603   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:18.036961   54101 cri.go:89] found id: ""
	I1212 00:23:18.036975   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.036982   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:18.036988   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:18.037047   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:18.065553   54101 cri.go:89] found id: ""
	I1212 00:23:18.065568   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.065575   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:18.065582   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:18.065643   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:18.090902   54101 cri.go:89] found id: ""
	I1212 00:23:18.090916   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.090923   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:18.090927   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:18.090987   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:18.120598   54101 cri.go:89] found id: ""
	I1212 00:23:18.120611   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.120618   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:18.120623   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:18.120686   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:18.147780   54101 cri.go:89] found id: ""
	I1212 00:23:18.147794   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.147801   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:18.147806   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:18.147863   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:18.176272   54101 cri.go:89] found id: ""
	I1212 00:23:18.176286   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.176293   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:18.176306   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:18.176368   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:18.201024   54101 cri.go:89] found id: ""
	I1212 00:23:18.201037   54101 logs.go:282] 0 containers: []
	W1212 00:23:18.201045   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:18.201052   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:18.201062   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:18.211552   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:18.211566   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:18.274135   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:18.266305   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.266699   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.268383   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.268854   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:18.270264   12189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:18.274145   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:18.274155   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:18.339516   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:18.339534   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:18.369221   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:18.369236   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:20.928503   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:20.938705   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:20.938771   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:20.966429   54101 cri.go:89] found id: ""
	I1212 00:23:20.966442   54101 logs.go:282] 0 containers: []
	W1212 00:23:20.966449   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:20.966463   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:20.966521   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:20.993659   54101 cri.go:89] found id: ""
	I1212 00:23:20.993674   54101 logs.go:282] 0 containers: []
	W1212 00:23:20.993694   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:20.993700   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:20.993783   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:21.021877   54101 cri.go:89] found id: ""
	I1212 00:23:21.021894   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.021901   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:21.021907   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:21.021974   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:21.050301   54101 cri.go:89] found id: ""
	I1212 00:23:21.050315   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.050333   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:21.050338   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:21.050394   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:21.074369   54101 cri.go:89] found id: ""
	I1212 00:23:21.074382   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.074399   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:21.074404   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:21.074459   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:21.100847   54101 cri.go:89] found id: ""
	I1212 00:23:21.100860   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.100867   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:21.100872   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:21.100930   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:21.129915   54101 cri.go:89] found id: ""
	I1212 00:23:21.129928   54101 logs.go:282] 0 containers: []
	W1212 00:23:21.129950   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:21.129958   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:21.129967   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:21.186387   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:21.186407   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:21.197421   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:21.197437   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:21.261078   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:21.252661   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.253431   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.255174   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.255799   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:21.257304   12293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:21.261090   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:21.261104   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:21.326885   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:21.326903   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:23.859105   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:23.869083   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:23.869143   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:23.892667   54101 cri.go:89] found id: ""
	I1212 00:23:23.892681   54101 logs.go:282] 0 containers: []
	W1212 00:23:23.892688   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:23.892693   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:23.892755   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:23.916368   54101 cri.go:89] found id: ""
	I1212 00:23:23.916381   54101 logs.go:282] 0 containers: []
	W1212 00:23:23.916388   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:23.916393   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:23.916456   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:23.953674   54101 cri.go:89] found id: ""
	I1212 00:23:23.953688   54101 logs.go:282] 0 containers: []
	W1212 00:23:23.953695   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:23.953700   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:23.953755   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:23.977280   54101 cri.go:89] found id: ""
	I1212 00:23:23.977293   54101 logs.go:282] 0 containers: []
	W1212 00:23:23.977300   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:23.977305   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:23.977364   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:24.002961   54101 cri.go:89] found id: ""
	I1212 00:23:24.002985   54101 logs.go:282] 0 containers: []
	W1212 00:23:24.003014   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:24.003020   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:24.003098   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:24.034368   54101 cri.go:89] found id: ""
	I1212 00:23:24.034382   54101 logs.go:282] 0 containers: []
	W1212 00:23:24.034393   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:24.034398   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:24.034470   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:24.065761   54101 cri.go:89] found id: ""
	I1212 00:23:24.065775   54101 logs.go:282] 0 containers: []
	W1212 00:23:24.065788   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:24.065796   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:24.065806   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:24.122870   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:24.122890   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:24.134384   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:24.134398   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:24.204008   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:24.196235   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.196812   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.198515   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.198869   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:24.200088   12399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:24.204018   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:24.204029   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:24.268817   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:24.268835   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:26.805407   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:26.815561   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:26.815619   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:26.843361   54101 cri.go:89] found id: ""
	I1212 00:23:26.843375   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.843382   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:26.843388   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:26.843447   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:26.867615   54101 cri.go:89] found id: ""
	I1212 00:23:26.867630   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.867637   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:26.867642   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:26.867698   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:26.897089   54101 cri.go:89] found id: ""
	I1212 00:23:26.897102   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.897109   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:26.897114   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:26.897173   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:26.920797   54101 cri.go:89] found id: ""
	I1212 00:23:26.920810   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.920817   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:26.920822   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:26.920878   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:26.948949   54101 cri.go:89] found id: ""
	I1212 00:23:26.948963   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.948970   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:26.948975   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:26.949034   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:26.972541   54101 cri.go:89] found id: ""
	I1212 00:23:26.972555   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.972563   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:26.972568   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:26.972631   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:26.998049   54101 cri.go:89] found id: ""
	I1212 00:23:26.998065   54101 logs.go:282] 0 containers: []
	W1212 00:23:26.998073   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:26.998089   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:26.998102   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:27.027523   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:27.027538   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:27.085127   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:27.085146   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:27.096087   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:27.096101   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:27.162090   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:27.153308   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.154010   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.155943   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.156645   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:27.158348   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:27.162101   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:27.162111   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:29.728366   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:29.738393   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:29.738452   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:29.764004   54101 cri.go:89] found id: ""
	I1212 00:23:29.764017   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.764024   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:29.764029   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:29.764089   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:29.787843   54101 cri.go:89] found id: ""
	I1212 00:23:29.787857   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.787874   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:29.787879   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:29.787936   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:29.812859   54101 cri.go:89] found id: ""
	I1212 00:23:29.812872   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.812879   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:29.812884   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:29.812941   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:29.837580   54101 cri.go:89] found id: ""
	I1212 00:23:29.837593   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.837600   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:29.837605   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:29.837673   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:29.861535   54101 cri.go:89] found id: ""
	I1212 00:23:29.861560   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.861567   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:29.861572   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:29.861644   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:29.886533   54101 cri.go:89] found id: ""
	I1212 00:23:29.886546   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.886553   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:29.886559   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:29.886624   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:29.913577   54101 cri.go:89] found id: ""
	I1212 00:23:29.913604   54101 logs.go:282] 0 containers: []
	W1212 00:23:29.913611   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:29.913619   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:29.913630   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:29.940660   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:29.940675   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:29.995286   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:29.995307   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:30.029235   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:30.029252   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:30.103143   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:30.093717   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.094664   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.096287   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.096764   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:30.098381   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:30.103157   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:30.103168   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
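
The cycle above is minikube's control-plane probe: after each failed wait for the apiserver it asks the CRI runtime, by name, for every expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) and, finding none, falls back to gathering logs. A minimal sketch reproducing the per-component check by hand, assuming shell access to the node (e.g. minikube ssh -p functional-767012) and crictl on the PATH:

    # List CRI containers (running or exited) for each expected component;
    # an empty result is what the log reports as 'found id: ""'.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$c\""
      else
        echo "$c: $ids"
      fi
    done
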
	I1212 00:23:32.666081   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:32.676000   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:32.676071   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:32.701112   54101 cri.go:89] found id: ""
	I1212 00:23:32.701125   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.701133   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:32.701138   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:32.701195   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:32.727727   54101 cri.go:89] found id: ""
	I1212 00:23:32.727741   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.727748   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:32.727753   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:32.727810   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:32.756561   54101 cri.go:89] found id: ""
	I1212 00:23:32.756574   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.756581   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:32.756586   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:32.756648   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:32.781745   54101 cri.go:89] found id: ""
	I1212 00:23:32.781758   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.781765   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:32.781771   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:32.781830   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:32.807544   54101 cri.go:89] found id: ""
	I1212 00:23:32.807558   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.807571   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:32.807576   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:32.807634   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:32.837232   54101 cri.go:89] found id: ""
	I1212 00:23:32.837246   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.837253   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:32.837259   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:32.837321   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:32.864631   54101 cri.go:89] found id: ""
	I1212 00:23:32.864645   54101 logs.go:282] 0 containers: []
	W1212 00:23:32.864660   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:32.864667   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:32.864678   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:32.927240   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:32.919009   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.919629   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.921337   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.921842   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:32.923382   12700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:32.927249   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:32.927276   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:32.990198   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:32.990226   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:33.020370   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:33.020389   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:33.077339   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:33.077359   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
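
Each "Gathering logs" pass collects the same four sources seen throughout this loop: the kubelet and containerd journals (last 400 lines each), recent kernel warnings, and a container listing with a docker fallback. A sketch that bundles them into files, assuming the same node shell; the output file names are illustrative:

    # Collect the diagnostics minikube gathers on every failed cycle.
    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo journalctl -u containerd -n 400 > containerd.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    # Prefer crictl when present, otherwise fall back to docker.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
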
	I1212 00:23:35.589167   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:35.599047   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:35.599105   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:35.624300   54101 cri.go:89] found id: ""
	I1212 00:23:35.624315   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.624322   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:35.624327   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:35.624387   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:35.647815   54101 cri.go:89] found id: ""
	I1212 00:23:35.647829   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.647837   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:35.647842   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:35.647900   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:35.676530   54101 cri.go:89] found id: ""
	I1212 00:23:35.676544   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.676551   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:35.676556   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:35.676617   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:35.705816   54101 cri.go:89] found id: ""
	I1212 00:23:35.705831   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.705838   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:35.705844   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:35.705903   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:35.733393   54101 cri.go:89] found id: ""
	I1212 00:23:35.733413   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.733421   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:35.733426   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:35.733485   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:35.757717   54101 cri.go:89] found id: ""
	I1212 00:23:35.757731   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.757738   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:35.757743   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:35.757800   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:35.782446   54101 cri.go:89] found id: ""
	I1212 00:23:35.782459   54101 logs.go:282] 0 containers: []
	W1212 00:23:35.782478   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:35.782487   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:35.782497   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:35.839811   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:35.839828   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:35.850443   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:35.850458   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:35.918359   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:35.910728   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.911186   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.912701   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.913021   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:35.914471   12813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:35.918370   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:35.918382   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:35.980124   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:35.980143   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
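
Every "describe nodes" attempt fails the same way: kubectl cannot reach https://localhost:8441, meaning nothing is listening on the apiserver port, which is consistent with the empty kube-apiserver container listing above. Two quick checks, assuming the node shell and that 8441 is the configured --apiserver-port:

    # Confirm nothing is bound to the apiserver port.
    sudo ss -ltnp | grep ':8441' || echo "nothing listening on 8441"
    # Probing the health endpoint should reproduce the 'connection refused'.
    curl -ksS https://localhost:8441/readyz || true
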
	I1212 00:23:38.530800   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:38.542531   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:38.542599   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:38.568754   54101 cri.go:89] found id: ""
	I1212 00:23:38.568767   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.568774   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:38.568788   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:38.568846   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:38.598747   54101 cri.go:89] found id: ""
	I1212 00:23:38.598759   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.598766   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:38.598771   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:38.598838   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:38.623489   54101 cri.go:89] found id: ""
	I1212 00:23:38.623503   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.623519   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:38.623525   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:38.623594   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:38.648000   54101 cri.go:89] found id: ""
	I1212 00:23:38.648013   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.648022   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:38.648027   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:38.648084   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:38.674721   54101 cri.go:89] found id: ""
	I1212 00:23:38.674734   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.674741   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:38.674746   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:38.674808   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:38.700695   54101 cri.go:89] found id: ""
	I1212 00:23:38.700708   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.700715   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:38.700720   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:38.700780   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:38.724873   54101 cri.go:89] found id: ""
	I1212 00:23:38.724886   54101 logs.go:282] 0 containers: []
	W1212 00:23:38.724892   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:38.724900   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:38.724910   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:38.751419   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:38.751434   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:38.807512   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:38.807530   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:38.818972   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:38.819002   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:38.889413   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:38.879843   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.881217   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.881803   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.883544   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:38.884066   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:38.889425   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:38.889435   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
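
The probe that opens each cycle, sudo pgrep -xnf kube-apiserver.*minikube.*, matches against the full command line (-f), requires the pattern to match exactly (-x), and keeps only the newest PID (-n); a non-zero exit (no apiserver process) is what triggers the next round of CRI checks. For example:

    # Exit status 0 only if a process whose full command line matches the
    # anchored pattern exists; -n keeps just the newest matching PID.
    if sudo pgrep -xnf 'kube-apiserver.*minikube.*'; then
      echo "apiserver process found"
    else
      echo "apiserver process not running"
    fi
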
	I1212 00:23:41.452716   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:41.462650   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:41.462718   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:41.487241   54101 cri.go:89] found id: ""
	I1212 00:23:41.487264   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.487271   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:41.487277   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:41.487335   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:41.511441   54101 cri.go:89] found id: ""
	I1212 00:23:41.511454   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.511461   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:41.511466   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:41.511523   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:41.560805   54101 cri.go:89] found id: ""
	I1212 00:23:41.560819   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.560826   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:41.560831   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:41.560887   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:41.587388   54101 cri.go:89] found id: ""
	I1212 00:23:41.587402   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.587408   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:41.587413   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:41.587469   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:41.611964   54101 cri.go:89] found id: ""
	I1212 00:23:41.611979   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.611986   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:41.611991   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:41.612051   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:41.637582   54101 cri.go:89] found id: ""
	I1212 00:23:41.637595   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.637601   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:41.637606   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:41.637662   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:41.660916   54101 cri.go:89] found id: ""
	I1212 00:23:41.660939   54101 logs.go:282] 0 containers: []
	W1212 00:23:41.660947   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:41.660955   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:41.660964   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:41.720148   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:41.720165   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:41.730670   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:41.730686   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:41.792978   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:41.784826   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.785364   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.786819   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.787322   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:41.788953   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:41.792987   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:41.792997   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:41.853248   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:41.853264   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
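
The timestamps show the probe repeating roughly every three seconds. An equivalent bounded poll looks like the sketch below; the two-minute cap is illustrative, not minikube's actual timeout:

    # Poll for the apiserver process every 3s, as the log does, with an
    # illustrative 2-minute deadline.
    deadline=$(( $(date +%s) + 120 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver"
        break
      fi
      sleep 3
    done
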
	I1212 00:23:44.384182   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:44.394508   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:44.394568   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:44.418597   54101 cri.go:89] found id: ""
	I1212 00:23:44.418612   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.418619   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:44.418624   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:44.418681   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:44.443581   54101 cri.go:89] found id: ""
	I1212 00:23:44.443595   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.443603   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:44.443608   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:44.443665   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:44.468881   54101 cri.go:89] found id: ""
	I1212 00:23:44.468895   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.468902   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:44.468907   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:44.468965   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:44.493396   54101 cri.go:89] found id: ""
	I1212 00:23:44.493410   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.493417   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:44.493422   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:44.493479   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:44.517484   54101 cri.go:89] found id: ""
	I1212 00:23:44.517498   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.517505   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:44.517510   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:44.517570   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:44.550796   54101 cri.go:89] found id: ""
	I1212 00:23:44.550810   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.550817   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:44.550822   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:44.550883   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:44.576925   54101 cri.go:89] found id: ""
	I1212 00:23:44.576938   54101 logs.go:282] 0 containers: []
	W1212 00:23:44.576946   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:44.576954   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:44.576964   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:44.589144   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:44.589160   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:44.657506   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:44.648963   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.649564   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.651341   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.651846   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:44.653593   13127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:44.657515   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:44.657526   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:44.718495   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:44.718513   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:44.745494   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:44.745508   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:47.304216   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:47.314254   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:47.314318   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:47.339739   54101 cri.go:89] found id: ""
	I1212 00:23:47.339753   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.339760   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:47.339766   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:47.339822   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:47.364136   54101 cri.go:89] found id: ""
	I1212 00:23:47.364150   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.364157   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:47.364162   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:47.364226   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:47.387941   54101 cri.go:89] found id: ""
	I1212 00:23:47.387957   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.387964   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:47.387969   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:47.388026   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:47.412100   54101 cri.go:89] found id: ""
	I1212 00:23:47.412114   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.412121   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:47.412126   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:47.412187   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:47.437977   54101 cri.go:89] found id: ""
	I1212 00:23:47.437997   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.438005   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:47.438011   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:47.438070   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:47.464751   54101 cri.go:89] found id: ""
	I1212 00:23:47.464765   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.464772   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:47.464778   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:47.464834   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:47.492824   54101 cri.go:89] found id: ""
	I1212 00:23:47.492838   54101 logs.go:282] 0 containers: []
	W1212 00:23:47.492845   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:47.492853   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:47.492863   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:47.549187   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:47.549205   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:47.561345   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:47.561361   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:47.637229   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:47.628185   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.629565   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.630374   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.631980   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:47.632725   13238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:47.637238   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:47.637249   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:47.700044   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:47.700063   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:50.232142   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:50.242326   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:50.242389   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:50.267337   54101 cri.go:89] found id: ""
	I1212 00:23:50.267351   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.267359   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:50.267364   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:50.267424   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:50.294402   54101 cri.go:89] found id: ""
	I1212 00:23:50.294416   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.294424   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:50.294428   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:50.294489   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:50.318907   54101 cri.go:89] found id: ""
	I1212 00:23:50.318921   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.318928   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:50.318938   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:50.319041   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:50.344349   54101 cri.go:89] found id: ""
	I1212 00:23:50.344362   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.344370   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:50.344375   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:50.344442   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:50.374529   54101 cri.go:89] found id: ""
	I1212 00:23:50.374543   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.374550   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:50.374556   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:50.374612   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:50.400874   54101 cri.go:89] found id: ""
	I1212 00:23:50.400888   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.400896   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:50.400903   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:50.400977   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:50.428510   54101 cri.go:89] found id: ""
	I1212 00:23:50.428525   54101 logs.go:282] 0 containers: []
	W1212 00:23:50.428533   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:50.428541   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:50.428553   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:50.455528   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:50.455545   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:50.510724   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:50.510743   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:50.521665   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:50.521681   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:50.611401   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:50.603277   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.603798   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.605445   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.605921   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:50.607608   13347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:50.611411   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:50.611424   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:53.175490   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:53.185411   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:53.185474   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:53.209584   54101 cri.go:89] found id: ""
	I1212 00:23:53.209597   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.209616   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:53.209628   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:53.209693   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:53.233686   54101 cri.go:89] found id: ""
	I1212 00:23:53.233700   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.233707   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:53.233712   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:53.233774   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:53.257587   54101 cri.go:89] found id: ""
	I1212 00:23:53.257601   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.257608   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:53.257613   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:53.257670   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:53.285867   54101 cri.go:89] found id: ""
	I1212 00:23:53.285880   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.285887   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:53.285892   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:53.285947   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:53.312516   54101 cri.go:89] found id: ""
	I1212 00:23:53.312530   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.312537   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:53.312541   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:53.312599   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:53.336425   54101 cri.go:89] found id: ""
	I1212 00:23:53.336445   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.336452   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:53.336457   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:53.336514   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:53.360258   54101 cri.go:89] found id: ""
	I1212 00:23:53.360271   54101 logs.go:282] 0 containers: []
	W1212 00:23:53.360279   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:53.360287   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:53.360296   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:53.422643   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:53.422660   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:23:53.451682   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:53.451698   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:53.508302   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:53.508320   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:53.518839   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:53.518855   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:53.608163   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:53.599819   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.600615   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.602118   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.602666   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:53.604185   13452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
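Every "describe nodes" attempt in this window fails the same way: kubectl cannot reach https://localhost:8441 because nothing inside the node is listening on that port, i.e. the kube-apiserver container never came up. A minimal sketch of the same reachability check, using only the Go standard library (the address and 2-second timeout are illustrative choices, not values taken from minikube's source):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probe makes a bare TCP connection attempt, which is all the failing
	// kubectl calls need to report "connect: connection refused".
	func probe(addr string) error {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		if err := probe("localhost:8441"); err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		fmt.Println("apiserver port is open")
	}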
	I1212 00:23:56.109087   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:56.119165   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:56.119227   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:56.143243   54101 cri.go:89] found id: ""
	I1212 00:23:56.143256   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.143263   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:56.143268   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:56.143326   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:56.168289   54101 cri.go:89] found id: ""
	I1212 00:23:56.168309   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.168316   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:56.168321   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:56.168379   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:56.192149   54101 cri.go:89] found id: ""
	I1212 00:23:56.192163   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.192172   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:56.192177   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:56.192238   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:56.216868   54101 cri.go:89] found id: ""
	I1212 00:23:56.216880   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.216887   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:56.216892   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:56.216954   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:56.241928   54101 cri.go:89] found id: ""
	I1212 00:23:56.241941   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.241951   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:56.241956   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:56.242011   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:56.265468   54101 cri.go:89] found id: ""
	I1212 00:23:56.265481   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.265488   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:56.265493   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:56.265552   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:56.290530   54101 cri.go:89] found id: ""
	I1212 00:23:56.290544   54101 logs.go:282] 0 containers: []
	W1212 00:23:56.290551   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:56.290559   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:56.290569   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:56.345149   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:56.345167   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:56.355854   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:56.355869   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:56.418379   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:56.410553   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.411250   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.412854   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.413395   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:56.414621   13543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:56.418389   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:56.418399   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:56.480524   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:56.480543   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
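The timestamps show the probe repeating on a roughly 2.5-3 second cadence: pgrep for a kube-apiserver process, a per-component crictl sweep, then a full log gather. A sketch of such a bounded wait loop, assuming an illustrative interval and deadline (minikube's actual constants are not visible in this log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning repeats the check in the log:
	//   sudo pgrep -xnf kube-apiserver.*minikube.*
	// pgrep exits 0 only when at least one process matches, so a nil
	// error from Run means the apiserver process exists.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(4 * time.Minute) // assumed budget
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(2500 * time.Millisecond) // matches the observed cadence
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}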
	I1212 00:23:59.011832   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:23:59.022048   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:23:59.022108   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:23:59.046210   54101 cri.go:89] found id: ""
	I1212 00:23:59.046224   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.046231   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:23:59.046236   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:23:59.046299   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:23:59.071192   54101 cri.go:89] found id: ""
	I1212 00:23:59.071206   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.071213   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:23:59.071217   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:23:59.071278   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:23:59.095678   54101 cri.go:89] found id: ""
	I1212 00:23:59.095692   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.095698   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:23:59.095703   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:23:59.095760   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:23:59.119812   54101 cri.go:89] found id: ""
	I1212 00:23:59.119825   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.119832   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:23:59.119837   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:23:59.119897   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:23:59.143943   54101 cri.go:89] found id: ""
	I1212 00:23:59.143957   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.143964   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:23:59.143969   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:23:59.144028   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:23:59.174483   54101 cri.go:89] found id: ""
	I1212 00:23:59.174506   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.174513   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:23:59.174519   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:23:59.174576   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:23:59.202048   54101 cri.go:89] found id: ""
	I1212 00:23:59.202061   54101 logs.go:282] 0 containers: []
	W1212 00:23:59.202068   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:23:59.202076   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:23:59.202087   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:23:59.257143   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:23:59.257161   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:23:59.268235   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:23:59.268252   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:23:59.334149   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:23:59.326488   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.326882   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.328393   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.328789   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:23:59.330327   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:23:59.334159   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:23:59.334184   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:23:59.396366   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:23:59.396383   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
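Each cycle walks the same seven component names and gets an empty ID list back for every one of them, so not even an exited control-plane container exists in containerd's k8s.io namespace. A sketch of that sweep (the containerIDs helper is ours; the component list and crictl flags are exactly those in the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors the per-component query in the log:
	//   sudo crictl ps -a --quiet --name=<component>
	// With --quiet, crictl prints one container ID per line, so empty
	// output is what the log reports as "No container was found matching".
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %v\n", c, ids)
		}
	}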
	I1212 00:24:01.926850   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:01.937253   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:01.937312   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:01.966272   54101 cri.go:89] found id: ""
	I1212 00:24:01.966286   54101 logs.go:282] 0 containers: []
	W1212 00:24:01.966293   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:01.966298   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:01.966359   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:01.991061   54101 cri.go:89] found id: ""
	I1212 00:24:01.991075   54101 logs.go:282] 0 containers: []
	W1212 00:24:01.991082   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:01.991087   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:01.991145   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:02.019646   54101 cri.go:89] found id: ""
	I1212 00:24:02.019661   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.019668   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:02.019673   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:02.019731   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:02.044619   54101 cri.go:89] found id: ""
	I1212 00:24:02.044634   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.044641   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:02.044648   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:02.044704   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:02.069486   54101 cri.go:89] found id: ""
	I1212 00:24:02.069500   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.069508   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:02.069512   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:02.069569   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:02.096887   54101 cri.go:89] found id: ""
	I1212 00:24:02.096901   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.096908   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:02.096913   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:02.096974   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:02.124826   54101 cri.go:89] found id: ""
	I1212 00:24:02.124839   54101 logs.go:282] 0 containers: []
	W1212 00:24:02.124847   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:02.124854   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:02.124864   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:02.152773   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:02.152789   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:02.210656   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:02.210676   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:02.222006   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:02.222022   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:02.293474   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:02.284427   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.285315   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.287050   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.287829   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:02.289483   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:02.293484   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:02.293499   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:04.860582   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:04.870768   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:04.870829   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:04.896675   54101 cri.go:89] found id: ""
	I1212 00:24:04.896689   54101 logs.go:282] 0 containers: []
	W1212 00:24:04.896696   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:04.896701   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:04.896759   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:04.925636   54101 cri.go:89] found id: ""
	I1212 00:24:04.925651   54101 logs.go:282] 0 containers: []
	W1212 00:24:04.925658   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:04.925664   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:04.925730   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:04.950839   54101 cri.go:89] found id: ""
	I1212 00:24:04.950853   54101 logs.go:282] 0 containers: []
	W1212 00:24:04.950860   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:04.950865   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:04.950922   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:04.976777   54101 cri.go:89] found id: ""
	I1212 00:24:04.976792   54101 logs.go:282] 0 containers: []
	W1212 00:24:04.976799   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:04.976804   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:04.976862   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:05.007523   54101 cri.go:89] found id: ""
	I1212 00:24:05.007538   54101 logs.go:282] 0 containers: []
	W1212 00:24:05.007547   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:05.007552   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:05.007615   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:05.034390   54101 cri.go:89] found id: ""
	I1212 00:24:05.034412   54101 logs.go:282] 0 containers: []
	W1212 00:24:05.034419   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:05.034424   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:05.034492   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:05.060364   54101 cri.go:89] found id: ""
	I1212 00:24:05.060378   54101 logs.go:282] 0 containers: []
	W1212 00:24:05.060385   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:05.060394   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:05.060405   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:05.130824   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:05.122601   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.123172   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.124809   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.125287   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:05.126908   13853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:05.130836   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:05.130846   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:05.193088   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:05.193106   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:05.221288   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:05.221305   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:05.280911   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:05.280928   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
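Note that the gather order shuffles from cycle to cycle (containerd first in one pass, kubelet or describe nodes first in another, dmesg last here). That pattern is consistent with the log sources being ranged over as a Go map, whose iteration order is deliberately randomized; whether minikube actually keeps them in a map is an assumption here, not something this log proves. A minimal demonstration:

	package main

	import "fmt"

	func main() {
		sources := map[string]string{
			"kubelet":          "journalctl -u kubelet -n 400",
			"dmesg":            "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"describe nodes":   "kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
			"containerd":       "journalctl -u containerd -n 400",
			"container status": "crictl ps -a",
		}
		for name := range sources { // order differs from run to run
			fmt.Println("gathering logs for", name, "...")
		}
	}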
	I1212 00:24:07.791957   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:07.803197   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:07.803258   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:07.849866   54101 cri.go:89] found id: ""
	I1212 00:24:07.849879   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.849885   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:07.849890   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:07.849944   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:07.879098   54101 cri.go:89] found id: ""
	I1212 00:24:07.879112   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.879118   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:07.879123   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:07.879180   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:07.903042   54101 cri.go:89] found id: ""
	I1212 00:24:07.903056   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.903063   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:07.903068   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:07.903124   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:07.926973   54101 cri.go:89] found id: ""
	I1212 00:24:07.926986   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.927024   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:07.927029   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:07.927093   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:07.952849   54101 cri.go:89] found id: ""
	I1212 00:24:07.952863   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.952870   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:07.952875   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:07.952937   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:07.976048   54101 cri.go:89] found id: ""
	I1212 00:24:07.976061   54101 logs.go:282] 0 containers: []
	W1212 00:24:07.976068   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:07.976073   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:07.976127   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:08.005144   54101 cri.go:89] found id: ""
	I1212 00:24:08.005157   54101 logs.go:282] 0 containers: []
	W1212 00:24:08.005165   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:08.005173   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:08.005183   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:08.062459   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:08.062477   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:08.073793   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:08.073821   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:08.140014   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:08.132203   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.132726   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.134246   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.134712   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:08.136200   13964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:08.140025   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:08.140035   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:08.202051   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:08.202070   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:10.733798   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:10.743998   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:10.744057   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:10.768781   54101 cri.go:89] found id: ""
	I1212 00:24:10.768795   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.768802   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:10.768807   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:10.768871   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:10.811478   54101 cri.go:89] found id: ""
	I1212 00:24:10.811492   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.811499   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:10.811504   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:10.811570   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:10.842339   54101 cri.go:89] found id: ""
	I1212 00:24:10.842358   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.842365   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:10.842370   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:10.842431   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:10.874129   54101 cri.go:89] found id: ""
	I1212 00:24:10.874143   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.874151   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:10.874157   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:10.874217   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:10.898217   54101 cri.go:89] found id: ""
	I1212 00:24:10.898231   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.898244   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:10.898249   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:10.898306   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:10.923360   54101 cri.go:89] found id: ""
	I1212 00:24:10.923374   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.923380   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:10.923385   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:10.923442   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:10.947605   54101 cri.go:89] found id: ""
	I1212 00:24:10.947619   54101 logs.go:282] 0 containers: []
	W1212 00:24:10.947626   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:10.947634   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:10.947645   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:11.006969   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:11.006995   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:11.018264   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:11.018281   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:11.082660   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:11.073705   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.074224   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.075940   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.076685   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:11.078178   14067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:11.082671   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:11.082681   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:11.144246   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:11.144263   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:13.671933   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:13.683185   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:13.683253   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:13.708906   54101 cri.go:89] found id: ""
	I1212 00:24:13.708920   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.708927   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:13.708932   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:13.709070   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:13.733465   54101 cri.go:89] found id: ""
	I1212 00:24:13.733479   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.733486   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:13.733491   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:13.733555   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:13.757055   54101 cri.go:89] found id: ""
	I1212 00:24:13.757069   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.757076   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:13.757084   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:13.757142   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:13.781588   54101 cri.go:89] found id: ""
	I1212 00:24:13.781602   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.781609   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:13.781614   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:13.781674   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:13.811312   54101 cri.go:89] found id: ""
	I1212 00:24:13.811325   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.811333   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:13.811337   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:13.811394   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:13.844313   54101 cri.go:89] found id: ""
	I1212 00:24:13.844326   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.844333   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:13.844338   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:13.844421   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:13.868420   54101 cri.go:89] found id: ""
	I1212 00:24:13.868434   54101 logs.go:282] 0 containers: []
	W1212 00:24:13.868441   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:13.868449   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:13.868459   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:13.923519   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:13.923536   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:13.934615   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:13.934631   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:14.000483   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:13.989816   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.990515   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.992025   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.992486   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:13.995350   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:14.000493   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:14.000505   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:14.063145   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:14.063165   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
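The "Process exited with status 1" wording in each failure is how Go's os/exec reports a non-zero exit. A sketch of how such a failure is captured and printed, assuming a plain local invocation rather than minikube's ssh_runner (the command string is the one from the log):

	package main

	import (
		"bytes"
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("/bin/bash", "-c",
			"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
		var stdout, stderr bytes.Buffer
		cmd.Stdout, cmd.Stderr = &stdout, &stderr
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A non-zero exit surfaces as *exec.ExitError; the harness logs
			// it together with the captured stdout and stderr streams.
			fmt.Printf("failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s\n",
				exitErr, stdout.String(), stderr.String())
		}
	}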
	I1212 00:24:16.593154   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:16.603519   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:16.603584   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:16.632576   54101 cri.go:89] found id: ""
	I1212 00:24:16.632589   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.632596   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:16.632603   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:16.632663   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:16.661504   54101 cri.go:89] found id: ""
	I1212 00:24:16.661518   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.661525   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:16.661530   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:16.661587   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:16.686915   54101 cri.go:89] found id: ""
	I1212 00:24:16.686930   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.686937   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:16.686942   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:16.687035   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:16.711579   54101 cri.go:89] found id: ""
	I1212 00:24:16.711594   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.711601   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:16.711606   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:16.711664   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:16.735976   54101 cri.go:89] found id: ""
	I1212 00:24:16.735990   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.735998   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:16.736003   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:16.736058   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:16.760337   54101 cri.go:89] found id: ""
	I1212 00:24:16.760351   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.760359   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:16.760364   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:16.760429   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:16.787594   54101 cri.go:89] found id: ""
	I1212 00:24:16.787608   54101 logs.go:282] 0 containers: []
	W1212 00:24:16.787625   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:16.787634   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:16.787644   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:16.853787   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:16.853805   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:16.865402   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:16.865418   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:16.934251   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:16.925653   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.926416   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.928097   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.928745   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:16.930355   14276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:16.934261   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:16.934272   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:16.995335   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:16.995360   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
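(The block above is one full iteration of the apiserver wait loop: a pgrep for a kube-apiserver process, a crictl listing for each control-plane container, then a log sweep over kubelet, dmesg, describe nodes, containerd, and container status. Every probe comes back empty, and the same iteration repeats on a roughly 3-second cadence for the rest of this test.) A minimal Go sketch of a loop with this shape, assuming a plain TCP probe of the apiserver port in place of minikube's real health checks; the port 8441 and the 3-second interval are taken from this log, while the 8-minute timeout is illustrative:

// Hypothetical poll loop, not minikube's actual implementation: retry a TCP
// probe of the apiserver port until it answers or the timeout expires.
package main

import (
	"context"
	"fmt"
	"net"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	err := wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 8*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			// "connection refused", as in the kubectl stderr above, just
			// means the apiserver is not up yet; keep polling.
			conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
			if err != nil {
				return false, nil
			}
			conn.Close()
			return true, nil
		})
	if err != nil {
		fmt.Println("apiserver never became reachable:", err)
	}
}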
	I1212 00:24:19.530311   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:19.540648   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:19.540711   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:19.573854   54101 cri.go:89] found id: ""
	I1212 00:24:19.573868   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.573875   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:19.573880   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:19.573938   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:19.598830   54101 cri.go:89] found id: ""
	I1212 00:24:19.598850   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.598857   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:19.598862   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:19.598965   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:19.624335   54101 cri.go:89] found id: ""
	I1212 00:24:19.624349   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.624357   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:19.624364   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:19.624451   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:19.650800   54101 cri.go:89] found id: ""
	I1212 00:24:19.650813   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.650820   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:19.650826   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:19.650887   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:19.676025   54101 cri.go:89] found id: ""
	I1212 00:24:19.676038   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.676046   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:19.676051   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:19.676111   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:19.702971   54101 cri.go:89] found id: ""
	I1212 00:24:19.702984   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.703003   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:19.703008   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:19.703066   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:19.727517   54101 cri.go:89] found id: ""
	I1212 00:24:19.727530   54101 logs.go:282] 0 containers: []
	W1212 00:24:19.727537   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:19.727545   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:19.727558   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:19.784930   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:19.784948   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:19.799325   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:19.799340   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:19.872030   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:19.864278   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.865037   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.866546   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.866841   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:19.868283   14381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:19.872041   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:19.872052   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:19.934549   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:19.934568   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:22.466009   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:22.476227   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:22.476288   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:22.501678   54101 cri.go:89] found id: ""
	I1212 00:24:22.501705   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.501712   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:22.501717   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:22.501785   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:22.531238   54101 cri.go:89] found id: ""
	I1212 00:24:22.531251   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.531258   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:22.531263   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:22.531321   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:22.554936   54101 cri.go:89] found id: ""
	I1212 00:24:22.554949   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.554956   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:22.554962   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:22.555055   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:22.582980   54101 cri.go:89] found id: ""
	I1212 00:24:22.583017   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.583025   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:22.583030   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:22.583094   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:22.608038   54101 cri.go:89] found id: ""
	I1212 00:24:22.608051   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.608069   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:22.608074   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:22.608134   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:22.631929   54101 cri.go:89] found id: ""
	I1212 00:24:22.631942   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.631959   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:22.631965   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:22.632035   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:22.660069   54101 cri.go:89] found id: ""
	I1212 00:24:22.660083   54101 logs.go:282] 0 containers: []
	W1212 00:24:22.660090   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:22.660107   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:22.660118   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:22.722675   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:22.714219   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.714970   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.716604   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.716888   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:22.718358   14476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:22.722685   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:22.722695   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:22.783718   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:22.783736   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:22.815064   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:22.815082   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:22.876099   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:22.876117   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
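Every `found id: ""` line above is the empty result of the same containerd query. A self-contained Go sketch of that check, shelling out to the exact crictl invocation recorded in the log (the helper name containerIDs is invented for illustration; crictl --quiet prints one container ID per line, so empty output means no matching container exists at all, not even an exited one):

// Hypothetical re-implementation of the per-component container check seen
// in the log: list all containers (running or exited) matching a name.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d container(s)\n", c, len(ids))
	}
}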
	I1212 00:24:25.389270   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:25.399208   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:25.399264   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:25.423023   54101 cri.go:89] found id: ""
	I1212 00:24:25.423036   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.423043   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:25.423048   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:25.423110   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:25.447118   54101 cri.go:89] found id: ""
	I1212 00:24:25.447132   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.447140   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:25.447145   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:25.447203   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:25.471506   54101 cri.go:89] found id: ""
	I1212 00:24:25.471520   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.471527   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:25.471532   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:25.471588   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:25.496289   54101 cri.go:89] found id: ""
	I1212 00:24:25.496302   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.496310   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:25.496315   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:25.496371   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:25.521055   54101 cri.go:89] found id: ""
	I1212 00:24:25.521068   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.521075   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:25.521080   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:25.521136   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:25.545427   54101 cri.go:89] found id: ""
	I1212 00:24:25.545441   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.545448   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:25.545453   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:25.545509   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:25.573059   54101 cri.go:89] found id: ""
	I1212 00:24:25.573073   54101 logs.go:282] 0 containers: []
	W1212 00:24:25.573080   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:25.573088   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:25.573098   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:25.627642   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:25.627661   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:25.638176   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:25.638192   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:25.702262   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:25.692958   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.693521   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.695662   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.696870   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:25.697262   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:25.702271   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:25.702283   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:25.768032   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:25.768050   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:28.306236   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:28.316297   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:28.316366   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:28.339825   54101 cri.go:89] found id: ""
	I1212 00:24:28.339838   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.339855   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:28.339860   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:28.339930   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:28.364813   54101 cri.go:89] found id: ""
	I1212 00:24:28.364826   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.364832   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:28.364837   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:28.364902   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:28.398903   54101 cri.go:89] found id: ""
	I1212 00:24:28.398917   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.398923   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:28.398928   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:28.398985   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:28.424563   54101 cri.go:89] found id: ""
	I1212 00:24:28.424577   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.424584   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:28.424595   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:28.424652   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:28.448511   54101 cri.go:89] found id: ""
	I1212 00:24:28.448524   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.448531   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:28.448536   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:28.448595   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:28.473282   54101 cri.go:89] found id: ""
	I1212 00:24:28.473295   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.473303   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:28.473308   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:28.473364   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:28.496850   54101 cri.go:89] found id: ""
	I1212 00:24:28.496864   54101 logs.go:282] 0 containers: []
	W1212 00:24:28.496871   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:28.496879   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:28.496889   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:28.563054   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:28.554678   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.555432   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.557227   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.557770   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:28.559159   14687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:28.563064   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:28.563076   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:28.625015   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:28.625034   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:28.656873   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:28.656887   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:28.714792   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:28.714811   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
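Of the gather steps in each iteration, only "describe nodes" needs a live apiserver, which is why it alone fails with "connection refused" while the journalctl and dmesg sweeps (local reads) keep succeeding. A hedged Go sketch of that one step, using the kubectl command line copied from the log and demoting a non-zero exit to a logged warning, matching the W-level lines above (error handling beyond that is illustrative):

// Hypothetical gather step: run the in-VM kubectl against the node-local
// kubeconfig; with nothing listening on localhost:8441 this reproduces the
// "connection refused" stderr captured in the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("W: failed describe nodes: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}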
	I1212 00:24:31.225710   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:31.235567   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:31.235633   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:31.259473   54101 cri.go:89] found id: ""
	I1212 00:24:31.259487   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.259494   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:31.259499   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:31.259556   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:31.284058   54101 cri.go:89] found id: ""
	I1212 00:24:31.284070   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.284077   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:31.284082   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:31.284138   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:31.306894   54101 cri.go:89] found id: ""
	I1212 00:24:31.306907   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.306914   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:31.306918   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:31.306978   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:31.334534   54101 cri.go:89] found id: ""
	I1212 00:24:31.334547   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.334554   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:31.334559   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:31.334615   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:31.359236   54101 cri.go:89] found id: ""
	I1212 00:24:31.359250   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.359258   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:31.359263   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:31.359321   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:31.383234   54101 cri.go:89] found id: ""
	I1212 00:24:31.383247   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.383254   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:31.383259   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:31.383314   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:31.407612   54101 cri.go:89] found id: ""
	I1212 00:24:31.407624   54101 logs.go:282] 0 containers: []
	W1212 00:24:31.407631   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:31.407638   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:31.407650   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:31.470123   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:31.470142   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:31.497215   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:31.497231   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:31.553428   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:31.553445   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:31.564292   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:31.564307   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:31.630782   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:31.622216   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.622767   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.624650   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.625108   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:31.626798   14808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:34.131141   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:34.141238   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:34.141296   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:34.166032   54101 cri.go:89] found id: ""
	I1212 00:24:34.166045   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.166053   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:34.166057   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:34.166117   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:34.192065   54101 cri.go:89] found id: ""
	I1212 00:24:34.192079   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.192086   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:34.192091   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:34.192146   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:34.216626   54101 cri.go:89] found id: ""
	I1212 00:24:34.216640   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.216646   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:34.216652   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:34.216710   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:34.244975   54101 cri.go:89] found id: ""
	I1212 00:24:34.244989   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.244997   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:34.245002   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:34.245058   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:34.269781   54101 cri.go:89] found id: ""
	I1212 00:24:34.269795   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.269802   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:34.269807   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:34.269867   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:34.294651   54101 cri.go:89] found id: ""
	I1212 00:24:34.294664   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.294672   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:34.294677   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:34.294740   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:34.319772   54101 cri.go:89] found id: ""
	I1212 00:24:34.319786   54101 logs.go:282] 0 containers: []
	W1212 00:24:34.319793   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:34.319801   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:34.319811   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:34.385955   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:34.377894   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.378715   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.380217   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.380694   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:34.382158   14892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:34.385966   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:34.385976   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:34.451474   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:34.451493   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:34.478755   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:34.478770   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:34.538195   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:34.538217   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:37.049062   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:37.060494   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:37.060558   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:37.096756   54101 cri.go:89] found id: ""
	I1212 00:24:37.096769   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.096776   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:37.096781   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:37.096857   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:37.123426   54101 cri.go:89] found id: ""
	I1212 00:24:37.123441   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.123448   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:37.123453   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:37.123515   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:37.150366   54101 cri.go:89] found id: ""
	I1212 00:24:37.150379   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.150387   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:37.150392   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:37.150455   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:37.176266   54101 cri.go:89] found id: ""
	I1212 00:24:37.176281   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.176288   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:37.176293   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:37.176379   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:37.211184   54101 cri.go:89] found id: ""
	I1212 00:24:37.211198   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.211205   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:37.211210   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:37.211278   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:37.235978   54101 cri.go:89] found id: ""
	I1212 00:24:37.235992   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.235999   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:37.236005   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:37.236064   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:37.261068   54101 cri.go:89] found id: ""
	I1212 00:24:37.261082   54101 logs.go:282] 0 containers: []
	W1212 00:24:37.261089   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:37.261097   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:37.261107   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:37.318643   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:37.318661   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:37.329758   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:37.329780   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:37.396581   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:37.388347   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.388766   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.390448   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.390869   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:37.392485   15001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:37.396591   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:37.396602   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:37.463371   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:37.463399   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:39.999532   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:40.021164   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:40.021239   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:40.055893   54101 cri.go:89] found id: ""
	I1212 00:24:40.055908   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.055916   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:40.055921   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:40.055984   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:40.085805   54101 cri.go:89] found id: ""
	I1212 00:24:40.085821   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.085831   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:40.085837   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:40.085902   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:40.113784   54101 cri.go:89] found id: ""
	I1212 00:24:40.113797   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.113804   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:40.113809   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:40.113867   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:40.141930   54101 cri.go:89] found id: ""
	I1212 00:24:40.141945   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.141954   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:40.141959   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:40.142018   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:40.168489   54101 cri.go:89] found id: ""
	I1212 00:24:40.168503   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.168510   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:40.168515   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:40.168575   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:40.195479   54101 cri.go:89] found id: ""
	I1212 00:24:40.195494   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.195501   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:40.195506   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:40.195572   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:40.225277   54101 cri.go:89] found id: ""
	I1212 00:24:40.225290   54101 logs.go:282] 0 containers: []
	W1212 00:24:40.225297   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:40.225305   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:40.225315   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:40.288821   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:40.280605   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.281157   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.282725   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.283252   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:40.284776   15101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:40.288833   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:40.288842   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:40.351250   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:40.351269   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:40.379379   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:40.379395   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:40.435768   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:40.435785   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:42.948581   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:42.958923   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:42.958983   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:42.983729   54101 cri.go:89] found id: ""
	I1212 00:24:42.983743   54101 logs.go:282] 0 containers: []
	W1212 00:24:42.983757   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:42.983762   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:42.983823   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:43.015682   54101 cri.go:89] found id: ""
	I1212 00:24:43.015696   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.015703   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:43.015708   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:43.015767   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:43.051631   54101 cri.go:89] found id: ""
	I1212 00:24:43.051644   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.051658   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:43.051662   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:43.051723   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:43.088521   54101 cri.go:89] found id: ""
	I1212 00:24:43.088535   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.088542   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:43.088547   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:43.088606   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:43.120828   54101 cri.go:89] found id: ""
	I1212 00:24:43.120842   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.120848   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:43.120854   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:43.120916   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:43.146768   54101 cri.go:89] found id: ""
	I1212 00:24:43.146782   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.146789   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:43.146794   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:43.146877   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:43.172067   54101 cri.go:89] found id: ""
	I1212 00:24:43.172081   54101 logs.go:282] 0 containers: []
	W1212 00:24:43.172089   54101 logs.go:284] No container was found matching "kindnet"
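	The sweep above is how the retry loop decides what is running: each control-plane component is looked up by name through the CRI, and an empty result (found id: "") means no container of that name was ever created, not that one crashed. With every component empty, the static pods are not coming up at all. The same sweep can be reproduced by hand inside the node; a minimal sketch over the exact component list the log checks:

	    # list containers (running or exited) for each control-plane component
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	        echo "== $c =="
	        sudo crictl ps -a --quiet --name="$c"
	    done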
	I1212 00:24:43.172097   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:43.172107   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:43.183115   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:43.183131   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:43.245564   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:43.237027   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.237641   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.239314   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.239878   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:43.241570   15207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:43.245574   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:43.245585   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:43.307071   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:43.307092   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:43.334124   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:43.334141   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
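	The timestamps show the probe-and-gather cycle repeating roughly every three seconds (00:24:40, 00:24:42, 00:24:45, ...), each pass collecting the same five sources: kubelet, dmesg, describe nodes, containerd, and container status. Since the crictl sweeps keep coming back empty, the kubelet journal is the gather most likely to explain why the static pods never start. A hedged triage filter over the same journal window the test reads; the grep pattern is an assumption, not part of the test:

	    # surface recent kubelet errors from the window the test gathers
	    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40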
	I1212 00:24:45.892688   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:45.902643   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:45.902701   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:45.927418   54101 cri.go:89] found id: ""
	I1212 00:24:45.927432   54101 logs.go:282] 0 containers: []
	W1212 00:24:45.927439   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:45.927444   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:45.927504   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:45.950969   54101 cri.go:89] found id: ""
	I1212 00:24:45.950982   54101 logs.go:282] 0 containers: []
	W1212 00:24:45.951005   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:45.951011   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:45.951068   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:45.977037   54101 cri.go:89] found id: ""
	I1212 00:24:45.977050   54101 logs.go:282] 0 containers: []
	W1212 00:24:45.977057   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:45.977062   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:45.977127   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:46.003570   54101 cri.go:89] found id: ""
	I1212 00:24:46.003587   54101 logs.go:282] 0 containers: []
	W1212 00:24:46.003594   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:46.003600   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:46.003668   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:46.035920   54101 cri.go:89] found id: ""
	I1212 00:24:46.035934   54101 logs.go:282] 0 containers: []
	W1212 00:24:46.035941   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:46.035946   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:46.036003   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:46.073828   54101 cri.go:89] found id: ""
	I1212 00:24:46.073842   54101 logs.go:282] 0 containers: []
	W1212 00:24:46.073849   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:46.073854   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:46.073911   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:46.106173   54101 cri.go:89] found id: ""
	I1212 00:24:46.106194   54101 logs.go:282] 0 containers: []
	W1212 00:24:46.106218   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:46.106226   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:46.106239   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:46.162624   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:46.162643   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:46.173580   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:46.173602   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:46.238544   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:46.230296   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.230879   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.232549   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.233036   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:46.234601   15317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:46.238555   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:46.238566   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:46.301177   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:46.301195   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:48.831063   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:48.843168   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:48.843226   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:48.871581   54101 cri.go:89] found id: ""
	I1212 00:24:48.871598   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.871605   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:48.871610   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:48.871669   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:48.896221   54101 cri.go:89] found id: ""
	I1212 00:24:48.896236   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.896244   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:48.896249   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:48.896307   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:48.920455   54101 cri.go:89] found id: ""
	I1212 00:24:48.920475   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.920483   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:48.920488   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:48.920550   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:48.944730   54101 cri.go:89] found id: ""
	I1212 00:24:48.944743   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.944750   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:48.944755   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:48.944815   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:48.969159   54101 cri.go:89] found id: ""
	I1212 00:24:48.969172   54101 logs.go:282] 0 containers: []
	W1212 00:24:48.969179   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:48.969184   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:48.969238   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:49.001344   54101 cri.go:89] found id: ""
	I1212 00:24:49.001360   54101 logs.go:282] 0 containers: []
	W1212 00:24:49.001368   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:49.001373   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:49.001440   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:49.026664   54101 cri.go:89] found id: ""
	I1212 00:24:49.026688   54101 logs.go:282] 0 containers: []
	W1212 00:24:49.026696   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:49.026704   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:49.026715   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:49.088266   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:49.088284   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:49.099424   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:49.099438   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:49.166422   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:49.157832   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.158583   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.160190   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.160890   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:49.162624   15425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:49.166432   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:49.166445   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:49.227337   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:49.227355   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:51.758903   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:51.768725   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:51.768786   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:51.792403   54101 cri.go:89] found id: ""
	I1212 00:24:51.792417   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.792424   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:51.792429   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:51.792497   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:51.819996   54101 cri.go:89] found id: ""
	I1212 00:24:51.820010   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.820016   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:51.820021   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:51.820080   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:51.844706   54101 cri.go:89] found id: ""
	I1212 00:24:51.844719   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.844727   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:51.844732   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:51.844800   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:51.870289   54101 cri.go:89] found id: ""
	I1212 00:24:51.870303   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.870316   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:51.870321   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:51.870378   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:51.894116   54101 cri.go:89] found id: ""
	I1212 00:24:51.894129   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.894137   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:51.894142   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:51.894200   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:51.918453   54101 cri.go:89] found id: ""
	I1212 00:24:51.918467   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.918474   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:51.918480   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:51.918538   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:51.942207   54101 cri.go:89] found id: ""
	I1212 00:24:51.942220   54101 logs.go:282] 0 containers: []
	W1212 00:24:51.942228   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:51.942235   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:51.942245   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:51.970818   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:51.970835   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:52.026675   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:52.026692   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:52.044175   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:52.044191   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:52.123266   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:52.114940   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:52.115962   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:52.117604   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:52.118040   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:52.119539   15541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:52.123275   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:52.123286   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
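	The "container status" gather uses a small shell fallback worth reading closely: it resolves crictl to an absolute path when it is on PATH (which crictl), runs the bare name otherwise, and only falls back to docker if the crictl invocation fails outright. Annotated, the command from the log is:

	    # prefer crictl (by absolute path if found), fall back to docker
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a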
	I1212 00:24:54.689949   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:54.700000   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:54.700070   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:54.725625   54101 cri.go:89] found id: ""
	I1212 00:24:54.725638   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.725645   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:54.725650   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:54.725716   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:54.748579   54101 cri.go:89] found id: ""
	I1212 00:24:54.748592   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.748600   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:54.748604   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:54.748661   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:54.772796   54101 cri.go:89] found id: ""
	I1212 00:24:54.772809   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.772816   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:54.772821   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:54.772876   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:54.797082   54101 cri.go:89] found id: ""
	I1212 00:24:54.797095   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.797102   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:54.797107   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:54.797168   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:54.821359   54101 cri.go:89] found id: ""
	I1212 00:24:54.821372   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.821379   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:54.821384   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:54.821441   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:54.848911   54101 cri.go:89] found id: ""
	I1212 00:24:54.848924   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.848931   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:54.848936   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:54.848993   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:54.872383   54101 cri.go:89] found id: ""
	I1212 00:24:54.872397   54101 logs.go:282] 0 containers: []
	W1212 00:24:54.872404   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:54.872412   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:54.872422   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:54.927404   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:54.927423   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:54.938083   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:54.938099   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:55.013009   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:54.998953   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:55.000234   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:55.001265   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:55.004572   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:55.007712   15629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:55.013021   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:55.013032   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:24:55.084355   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:55.084375   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:57.624991   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:24:57.635207   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:24:57.635270   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:24:57.662282   54101 cri.go:89] found id: ""
	I1212 00:24:57.662296   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.662304   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:24:57.662309   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:24:57.662365   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:24:57.692048   54101 cri.go:89] found id: ""
	I1212 00:24:57.692061   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.692068   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:24:57.692073   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:24:57.692128   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:24:57.717665   54101 cri.go:89] found id: ""
	I1212 00:24:57.717679   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.717686   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:24:57.717692   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:24:57.717752   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:24:57.746206   54101 cri.go:89] found id: ""
	I1212 00:24:57.746219   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.746226   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:24:57.746233   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:24:57.746291   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:24:57.772883   54101 cri.go:89] found id: ""
	I1212 00:24:57.772896   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.772904   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:24:57.772909   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:24:57.772969   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:24:57.796550   54101 cri.go:89] found id: ""
	I1212 00:24:57.796564   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.796571   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:24:57.796576   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:24:57.796636   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:24:57.819457   54101 cri.go:89] found id: ""
	I1212 00:24:57.819470   54101 logs.go:282] 0 containers: []
	W1212 00:24:57.819481   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:24:57.819489   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:24:57.819499   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:24:57.848789   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:24:57.848804   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:24:57.903379   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:24:57.903404   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:24:57.914134   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:24:57.914150   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:24:57.981734   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:24:57.973800   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:57.974813   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:57.975633   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:57.976681   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:24:57.977401   15746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:24:57.981743   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:24:57.981764   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:00.548466   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:00.559868   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:00.559941   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:00.588355   54101 cri.go:89] found id: ""
	I1212 00:25:00.588369   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.588377   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:00.588383   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:00.588446   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:00.615059   54101 cri.go:89] found id: ""
	I1212 00:25:00.615073   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.615080   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:00.615085   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:00.615144   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:00.642285   54101 cri.go:89] found id: ""
	I1212 00:25:00.642299   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.642307   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:00.642312   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:00.642370   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:00.670680   54101 cri.go:89] found id: ""
	I1212 00:25:00.670693   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.670701   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:00.670706   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:00.670766   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:00.696244   54101 cri.go:89] found id: ""
	I1212 00:25:00.696258   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.696266   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:00.696271   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:00.696386   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:00.725727   54101 cri.go:89] found id: ""
	I1212 00:25:00.725741   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.725758   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:00.725764   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:00.725844   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:00.754004   54101 cri.go:89] found id: ""
	I1212 00:25:00.754018   54101 logs.go:282] 0 containers: []
	W1212 00:25:00.754025   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:00.754032   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:00.754044   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:00.766092   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:00.766108   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:00.830876   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:00.822487   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.823145   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.824701   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.825291   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:00.826797   15839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:25:00.830886   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:00.830899   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:00.893247   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:00.893265   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:00.920729   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:00.920744   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:03.481388   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:03.491775   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:03.491838   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:03.521216   54101 cri.go:89] found id: ""
	I1212 00:25:03.521230   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.521238   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:03.521243   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:03.521304   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:03.549226   54101 cri.go:89] found id: ""
	I1212 00:25:03.549240   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.549247   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:03.549258   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:03.549315   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:03.577069   54101 cri.go:89] found id: ""
	I1212 00:25:03.577083   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.577090   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:03.577097   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:03.577156   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:03.606566   54101 cri.go:89] found id: ""
	I1212 00:25:03.606580   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.606587   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:03.606592   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:03.606652   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:03.631034   54101 cri.go:89] found id: ""
	I1212 00:25:03.631049   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.631057   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:03.631062   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:03.631125   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:03.655850   54101 cri.go:89] found id: ""
	I1212 00:25:03.655864   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.655871   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:03.655876   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:03.655951   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:03.682159   54101 cri.go:89] found id: ""
	I1212 00:25:03.682173   54101 logs.go:282] 0 containers: []
	W1212 00:25:03.682180   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:03.682187   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:03.682200   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:03.692956   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:03.692973   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:03.759732   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:03.751026   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.751692   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.753437   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.754061   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:03.755694   15944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1212 00:25:03.759743   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:03.759754   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:03.821448   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:03.821467   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:03.854174   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:03.854191   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
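	The dmesg gather filters the kernel ring buffer to warnings and worse (--level warn,err,crit,alert,emerg), disables the pager and colors (-P, -L=never), and keeps human-readable relative timestamps (-H). When correlating kernel events with the log times above, absolute timestamps are easier to work with; a hedged equivalent, assuming util-linux dmesg with -T/--ctime:

	    # same severity filter, with wall-clock timestamps instead of relative ones
	    sudo dmesg -T --level=warn,err,crit,alert,emerg | tail -n 50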
	I1212 00:25:06.412785   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:06.423128   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:06.423192   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:06.451062   54101 cri.go:89] found id: ""
	I1212 00:25:06.451075   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.451082   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:06.451087   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:06.451145   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:06.476861   54101 cri.go:89] found id: ""
	I1212 00:25:06.476875   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.476882   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:06.476888   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:06.476956   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:06.502250   54101 cri.go:89] found id: ""
	I1212 00:25:06.502277   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.502284   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:06.502295   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:06.502363   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:06.527789   54101 cri.go:89] found id: ""
	I1212 00:25:06.527803   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.527810   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:06.527816   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:06.527876   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:06.552928   54101 cri.go:89] found id: ""
	I1212 00:25:06.552942   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.552950   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:06.552956   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:06.553015   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:06.580455   54101 cri.go:89] found id: ""
	I1212 00:25:06.580468   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.580475   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:06.580481   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:06.580541   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:06.605618   54101 cri.go:89] found id: ""
	I1212 00:25:06.605632   54101 logs.go:282] 0 containers: []
	W1212 00:25:06.605640   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:06.605656   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:06.605667   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:06.661856   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:06.661873   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:06.673040   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:06.673057   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:06.744531   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:06.737026   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.737431   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.738919   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.739260   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.740703   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:06.737026   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.737431   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.738919   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.739260   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:06.740703   16050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:06.744541   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:06.744552   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:06.810963   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:06.810982   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:09.340882   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:09.351148   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:09.351207   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:09.376060   54101 cri.go:89] found id: ""
	I1212 00:25:09.376074   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.376081   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:09.376086   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:09.376144   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:09.401509   54101 cri.go:89] found id: ""
	I1212 00:25:09.401524   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.401532   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:09.401537   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:09.401594   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:09.430682   54101 cri.go:89] found id: ""
	I1212 00:25:09.430697   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.430704   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:09.430709   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:09.430779   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:09.455570   54101 cri.go:89] found id: ""
	I1212 00:25:09.455583   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.455590   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:09.455596   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:09.455652   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:09.480221   54101 cri.go:89] found id: ""
	I1212 00:25:09.480234   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.480251   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:09.480257   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:09.480312   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:09.504553   54101 cri.go:89] found id: ""
	I1212 00:25:09.504566   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.504573   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:09.504578   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:09.504634   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:09.529091   54101 cri.go:89] found id: ""
	I1212 00:25:09.529105   54101 logs.go:282] 0 containers: []
	W1212 00:25:09.529111   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:09.529119   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:09.529129   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:09.590147   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:09.590169   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:09.616705   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:09.616720   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:09.674296   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:09.674314   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:09.685008   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:09.685023   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:09.747995   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:09.740039   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.740945   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.742442   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.742752   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.744216   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:09.740039   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.740945   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.742442   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.742752   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:09.744216   16167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
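Each pass checks every expected control-plane component by name through crictl and finds nothing, which is consistent with the kubelet never starting the static pods. The per-component checks from the log can be reproduced in one loop (a sketch using the same component names and crictl flags the log shows):

    # Query containerd through crictl for each expected control-plane component.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name "$c")
      if [ -n "$ids" ]; then
        echo "$c: $ids"
      else
        echo "no container found matching \"$c\""
      fi
    done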
	I1212 00:25:12.248240   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:12.258577   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:12.258636   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:12.296410   54101 cri.go:89] found id: ""
	I1212 00:25:12.296425   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.296432   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:12.296438   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:12.296495   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:12.322054   54101 cri.go:89] found id: ""
	I1212 00:25:12.322069   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.322076   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:12.322081   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:12.322137   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:12.354557   54101 cri.go:89] found id: ""
	I1212 00:25:12.354570   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.354577   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:12.354582   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:12.354643   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:12.379214   54101 cri.go:89] found id: ""
	I1212 00:25:12.379228   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.379235   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:12.379240   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:12.379297   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:12.403239   54101 cri.go:89] found id: ""
	I1212 00:25:12.403253   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.403261   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:12.403266   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:12.403325   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:12.429024   54101 cri.go:89] found id: ""
	I1212 00:25:12.429039   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.429052   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:12.429058   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:12.429117   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:12.454240   54101 cri.go:89] found id: ""
	I1212 00:25:12.454253   54101 logs.go:282] 0 containers: []
	W1212 00:25:12.454260   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:12.454268   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:12.454279   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:12.465168   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:12.465185   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:12.530196   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:12.522373   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.522762   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.524330   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.524677   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.526171   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:12.522373   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.522762   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.524330   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.524677   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:12.526171   16258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:12.530207   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:12.530218   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:12.596659   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:12.596686   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:12.629646   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:12.629666   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:15.188117   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:15.198184   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:15.198246   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:15.222760   54101 cri.go:89] found id: ""
	I1212 00:25:15.222774   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.222781   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:15.222786   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:15.222841   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:15.247134   54101 cri.go:89] found id: ""
	I1212 00:25:15.247149   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.247156   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:15.247161   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:15.247220   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:15.273493   54101 cri.go:89] found id: ""
	I1212 00:25:15.273506   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.273513   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:15.273518   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:15.273575   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:15.325769   54101 cri.go:89] found id: ""
	I1212 00:25:15.325782   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.325790   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:15.325794   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:15.325851   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:15.352564   54101 cri.go:89] found id: ""
	I1212 00:25:15.352578   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.352589   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:15.352594   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:15.352652   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:15.381006   54101 cri.go:89] found id: ""
	I1212 00:25:15.381025   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.381032   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:15.381037   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:15.381094   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:15.404889   54101 cri.go:89] found id: ""
	I1212 00:25:15.404903   54101 logs.go:282] 0 containers: []
	W1212 00:25:15.404910   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:15.404917   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:15.404936   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:15.472619   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:15.464098   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.465350   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.466018   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.467674   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.468107   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:15.464098   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.465350   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.466018   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.467674   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:15.468107   16363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:15.472631   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:15.472643   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:15.533279   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:15.533297   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:15.563170   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:15.563185   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:15.622483   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:15.622499   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
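The order of the gathered sources varies between passes, but the set is fixed: kubelet and containerd journals, filtered dmesg, describe nodes, and a container listing. The same bundle can be captured in one shot for offline inspection (a sketch run inside the node; the output file names are arbitrary):

    # Capture the same diagnostics minikube collects on each pass.
    sudo journalctl -u kubelet -n 400 --no-pager > kubelet.log
    sudo journalctl -u containerd -n 400 --no-pager > containerd.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo crictl ps -a > containers.log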
	I1212 00:25:18.135301   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:18.145599   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:18.145657   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:18.170223   54101 cri.go:89] found id: ""
	I1212 00:25:18.170237   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.170245   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:18.170250   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:18.170317   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:18.194981   54101 cri.go:89] found id: ""
	I1212 00:25:18.195034   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.195042   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:18.195047   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:18.195107   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:18.219741   54101 cri.go:89] found id: ""
	I1212 00:25:18.219754   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.219762   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:18.219767   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:18.219836   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:18.244023   54101 cri.go:89] found id: ""
	I1212 00:25:18.244036   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.244043   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:18.244048   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:18.244105   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:18.268830   54101 cri.go:89] found id: ""
	I1212 00:25:18.268844   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.268852   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:18.268857   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:18.268920   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:18.308533   54101 cri.go:89] found id: ""
	I1212 00:25:18.308547   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.308553   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:18.308558   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:18.308618   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:18.342407   54101 cri.go:89] found id: ""
	I1212 00:25:18.342420   54101 logs.go:282] 0 containers: []
	W1212 00:25:18.342426   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:18.342434   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:18.342444   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:18.411629   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:18.403777   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.404392   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.405943   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.406371   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.407842   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:18.403777   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.404392   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.405943   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.406371   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:18.407842   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:18.411640   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:18.411652   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:18.476356   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:18.476375   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:18.508597   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:18.508613   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:18.565071   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:18.565088   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
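The describe-nodes failure itself is only a symptom: kubectl is pointed at the in-node kubeconfig and the apiserver it names is not up. A cheaper readiness probe can be run with the same binary and kubeconfig before paying for a full describe (a sketch using the paths from this run):

    # Ask the apiserver for its readiness endpoint directly; this fails fast while it is down.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw=/readyz \
      || echo "apiserver not ready"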
[Two further diagnostic passes, at 00:25:21 and 00:25:24, repeat the block above with identical results apart from timestamps and kubectl PIDs: the same pgrep and per-component crictl checks find no containers, the same kubelet/dmesg/containerd/container-status logs are gathered, and the same describe-nodes call fails with connection refused against localhost:8441.]
	I1212 00:25:26.972180   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:26.982717   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:26.982778   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:27.016302   54101 cri.go:89] found id: ""
	I1212 00:25:27.016317   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.016324   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:27.016329   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:27.016390   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:27.041562   54101 cri.go:89] found id: ""
	I1212 00:25:27.041576   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.041583   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:27.041588   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:27.041647   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:27.067288   54101 cri.go:89] found id: ""
	I1212 00:25:27.067301   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.067308   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:27.067313   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:27.067370   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:27.093958   54101 cri.go:89] found id: ""
	I1212 00:25:27.093978   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.093985   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:27.093990   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:27.094046   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:27.119290   54101 cri.go:89] found id: ""
	I1212 00:25:27.119303   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.119310   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:27.119321   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:27.119378   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:27.147433   54101 cri.go:89] found id: ""
	I1212 00:25:27.147446   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.147452   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:27.147457   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:27.147513   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:27.172138   54101 cri.go:89] found id: ""
	I1212 00:25:27.172152   54101 logs.go:282] 0 containers: []
	W1212 00:25:27.172159   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:27.172167   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:27.172177   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:27.228777   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:27.228797   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:27.240006   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:27.240021   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:27.317423   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:27.308478   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.309592   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.311317   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.311656   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.313135   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:27.308478   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.309592   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.311317   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.311656   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:27.313135   16785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:27.317433   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:27.317444   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:27.386770   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:27.386790   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
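With no apiserver container ever created, the kubelet journal is the most likely place to explain why the static pods were not started. A narrowing pass over the same journal minikube already collects (a sketch; the grep pattern is an assumption, not taken from this log):

    # Pull apiserver- and static-pod-related kubelet lines from the journal.
    sudo journalctl -u kubelet -n 400 --no-pager \
      | grep -Ei 'kube-apiserver|static pod|failed|error' \
      | tail -n 40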
	I1212 00:25:29.918004   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:29.928163   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:29.928225   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:29.957041   54101 cri.go:89] found id: ""
	I1212 00:25:29.957055   54101 logs.go:282] 0 containers: []
	W1212 00:25:29.957062   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:29.957067   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:29.957124   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:29.982223   54101 cri.go:89] found id: ""
	I1212 00:25:29.982237   54101 logs.go:282] 0 containers: []
	W1212 00:25:29.982244   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:29.982249   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:29.982306   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:30.021601   54101 cri.go:89] found id: ""
	I1212 00:25:30.021616   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.021625   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:30.021630   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:30.021707   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:30.065430   54101 cri.go:89] found id: ""
	I1212 00:25:30.065447   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.065456   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:30.065462   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:30.065547   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:30.094609   54101 cri.go:89] found id: ""
	I1212 00:25:30.094623   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.094630   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:30.094635   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:30.094695   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:30.122604   54101 cri.go:89] found id: ""
	I1212 00:25:30.122618   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.122626   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:30.122631   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:30.122690   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:30.148645   54101 cri.go:89] found id: ""
	I1212 00:25:30.148659   54101 logs.go:282] 0 containers: []
	W1212 00:25:30.148667   54101 logs.go:284] No container was found matching "kindnet"
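Note: the block above is one full pass of minikube's control-plane probe: a pgrep for a running kube-apiserver process, then a name-filtered "crictl ps -a --quiet" for each expected component. Every filter returns an empty ID list, meaning containerd is healthy but has never created a single Kubernetes container. A compact equivalent of the pass, assuming a shell on the node:

	# Count matching containers (running or exited) per control-plane component.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet; do
	  printf '%-24s %s\n' "$c" "$(sudo crictl ps -a --quiet --name="$c" | wc -l)"
	done

The same pass, with identical empty results, repeats roughly every three seconds through the rest of the wait loop below.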
	I1212 00:25:30.148675   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:30.148685   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:30.206432   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:30.206452   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:30.218454   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:30.218469   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:30.284319   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:30.274262   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.275194   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.276848   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.277482   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.278689   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:30.274262   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.275194   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.276848   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.277482   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:30.278689   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:30.284328   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:30.284339   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:30.356346   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:30.356372   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:32.883437   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:32.893868   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:32.893927   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:32.917839   54101 cri.go:89] found id: ""
	I1212 00:25:32.917852   54101 logs.go:282] 0 containers: []
	W1212 00:25:32.917859   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:32.917865   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:32.917931   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:32.942885   54101 cri.go:89] found id: ""
	I1212 00:25:32.942899   54101 logs.go:282] 0 containers: []
	W1212 00:25:32.942906   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:32.942911   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:32.942974   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:32.968519   54101 cri.go:89] found id: ""
	I1212 00:25:32.968532   54101 logs.go:282] 0 containers: []
	W1212 00:25:32.968539   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:32.968544   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:32.968602   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:33.004343   54101 cri.go:89] found id: ""
	I1212 00:25:33.004357   54101 logs.go:282] 0 containers: []
	W1212 00:25:33.004365   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:33.004370   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:33.004440   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:33.033496   54101 cri.go:89] found id: ""
	I1212 00:25:33.033510   54101 logs.go:282] 0 containers: []
	W1212 00:25:33.033524   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:33.033530   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:33.033590   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:33.061868   54101 cri.go:89] found id: ""
	I1212 00:25:33.061890   54101 logs.go:282] 0 containers: []
	W1212 00:25:33.061898   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:33.061903   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:33.061969   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:33.088616   54101 cri.go:89] found id: ""
	I1212 00:25:33.088630   54101 logs.go:282] 0 containers: []
	W1212 00:25:33.088637   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:33.088645   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:33.088655   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:33.144882   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:33.144899   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:33.156391   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:33.156407   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:33.220404   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:33.211436   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.212144   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.214079   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.214925   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.216614   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:33.211436   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.212144   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.214079   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.214925   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:33.216614   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:33.220413   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:33.220424   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:33.291732   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:33.291751   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:35.829535   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:35.839412   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:25:35.839479   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:25:35.864609   54101 cri.go:89] found id: ""
	I1212 00:25:35.864629   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.864639   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:25:35.864644   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:25:35.864705   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:25:35.888220   54101 cri.go:89] found id: ""
	I1212 00:25:35.888234   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.888241   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:25:35.888245   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:25:35.888304   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:25:35.911726   54101 cri.go:89] found id: ""
	I1212 00:25:35.911739   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.911746   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:25:35.911751   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:25:35.911812   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:25:35.937495   54101 cri.go:89] found id: ""
	I1212 00:25:35.937510   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.937517   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:25:35.937522   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:25:35.937578   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:25:35.962276   54101 cri.go:89] found id: ""
	I1212 00:25:35.962290   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.962296   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:25:35.962301   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:25:35.962360   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:25:35.985962   54101 cri.go:89] found id: ""
	I1212 00:25:35.985981   54101 logs.go:282] 0 containers: []
	W1212 00:25:35.985989   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:25:35.985994   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:25:35.986056   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:25:36.012853   54101 cri.go:89] found id: ""
	I1212 00:25:36.012867   54101 logs.go:282] 0 containers: []
	W1212 00:25:36.012875   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:25:36.012882   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:25:36.012895   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:25:36.069296   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:25:36.069315   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:25:36.080983   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:25:36.081000   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:25:36.149041   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:25:36.139864   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.140552   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.142366   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.143049   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.144891   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:25:36.139864   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.140552   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.142366   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.143049   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:25:36.144891   17098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:25:36.149053   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:25:36.149064   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:25:36.210509   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:25:36.210528   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:25:38.743061   54101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:25:38.752984   54101 kubeadm.go:602] duration metric: took 4m3.726857079s to restartPrimaryControlPlane
	W1212 00:25:38.753047   54101 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 00:25:38.753120   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
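Note: after 4m3.7s of those empty probes, minikube stops trying to restart the existing control plane and falls back to a clean rebuild: wipe kubeadm state, clear stale kubeconfigs, then run kubeadm init again. The first step of that recovery is the command just logged:

	# Tear down all kubeadm-managed state before re-initializing (as logged above).
	sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
	  kubeadm reset --cri-socket /run/containerd/containerd.sock --force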
	I1212 00:25:39.158817   54101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:25:39.172695   54101 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:25:39.181725   54101 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:25:39.181785   54101 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:25:39.189823   54101 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:25:39.189833   54101 kubeadm.go:158] found existing configuration files:
	
	I1212 00:25:39.189882   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:25:39.197507   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:25:39.197568   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:25:39.206290   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:25:39.215918   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:25:39.215979   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:25:39.224009   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:25:39.231677   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:25:39.231744   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:25:39.239027   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:25:39.246759   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:25:39.246820   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
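Note: the grep/rm sequence above is minikube's stale-kubeconfig sweep: for each of the four control-plane kubeconfigs it checks whether the file already points at https://control-plane.minikube.internal:8441 and removes it otherwise. Because kubeadm reset deleted all four files, every grep exits with status 2 (file missing) and the rm calls are no-ops. The logic, condensed:

	# Drop any kubeconfig that does not reference the expected endpoint.
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done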
	I1212 00:25:39.254322   54101 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:25:39.294892   54101 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:25:39.294976   54101 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:25:39.369123   54101 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:25:39.369186   54101 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:25:39.369220   54101 kubeadm.go:319] OS: Linux
	I1212 00:25:39.369264   54101 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:25:39.369311   54101 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:25:39.369356   54101 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:25:39.369403   54101 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:25:39.369450   54101 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:25:39.369496   54101 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:25:39.369541   54101 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:25:39.369587   54101 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:25:39.369632   54101 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:25:39.438649   54101 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:25:39.438759   54101 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:25:39.438849   54101 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:25:39.447406   54101 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:25:39.452683   54101 out.go:252]   - Generating certificates and keys ...
	I1212 00:25:39.452767   54101 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:25:39.452831   54101 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:25:39.452906   54101 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:25:39.452965   54101 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:25:39.453033   54101 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:25:39.453085   54101 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 00:25:39.453148   54101 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:25:39.453208   54101 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:25:39.453281   54101 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:25:39.453353   54101 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:25:39.453389   54101 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 00:25:39.453445   54101 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:25:39.710711   54101 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:25:40.209307   54101 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:25:40.334299   54101 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:25:40.657582   54101 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:25:40.893171   54101 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:25:40.893926   54101 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:25:40.896489   54101 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:25:40.899767   54101 out.go:252]   - Booting up control plane ...
	I1212 00:25:40.899871   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:25:40.899953   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:25:40.900236   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:25:40.921621   54101 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:25:40.921722   54101 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:25:40.928629   54101 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:25:40.928898   54101 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:25:40.928939   54101 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:25:41.061713   54101 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:25:41.061825   54101 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:29:41.062316   54101 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001026811s
	I1212 00:29:41.062606   54101 kubeadm.go:319] 
	I1212 00:29:41.062683   54101 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:29:41.062716   54101 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:29:41.062821   54101 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:29:41.062826   54101 kubeadm.go:319] 
	I1212 00:29:41.062929   54101 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:29:41.062960   54101 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:29:41.063008   54101 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:29:41.063012   54101 kubeadm.go:319] 
	I1212 00:29:41.067208   54101 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:29:41.067622   54101 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:29:41.067731   54101 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:29:41.067994   54101 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 00:29:41.067998   54101 kubeadm.go:319] 
	I1212 00:29:41.068065   54101 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 00:29:41.068164   54101 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001026811s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
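Note: this is the real failure, distinct from the connection-refused noise earlier: kubeadm wrote the static-pod manifests and started the kubelet, but the kubelet never answered http://127.0.0.1:10248/healthz, so the 4m0s kubelet-check window expired and init aborted. The second SystemVerification warning is the strongest lead in the log: this host runs cgroup v1 (kernel 5.15.0-1084-aws), and per the warning a v1.35 kubelet will not start there unless FailCgroupV1 is explicitly set to false, which matches the "kubelet is not running" signature exactly. The triage the output itself suggests, plus kubeadm's own health probe:

	# Inspect the kubelet service and replay kubeadm's health check.
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet -n 100
	curl -sS http://127.0.0.1:10248/healthz; echo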
	
	I1212 00:29:41.068252   54101 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 00:29:41.482759   54101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:29:41.496287   54101 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 00:29:41.496351   54101 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:29:41.504378   54101 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:29:41.504387   54101 kubeadm.go:158] found existing configuration files:
	
	I1212 00:29:41.504442   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 00:29:41.512585   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:29:41.512640   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:29:41.520530   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 00:29:41.528262   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:29:41.528318   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:29:41.536111   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 00:29:41.543998   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:29:41.544056   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:29:41.551686   54101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 00:29:41.559774   54101 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:29:41.559831   54101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:29:41.567115   54101 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 00:29:41.604105   54101 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 00:29:41.604156   54101 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 00:29:41.681810   54101 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 00:29:41.681880   54101 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 00:29:41.681919   54101 kubeadm.go:319] OS: Linux
	I1212 00:29:41.681969   54101 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 00:29:41.682023   54101 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 00:29:41.682069   54101 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 00:29:41.682134   54101 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 00:29:41.682195   54101 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 00:29:41.682256   54101 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 00:29:41.682310   54101 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 00:29:41.682358   54101 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 00:29:41.682410   54101 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 00:29:41.751743   54101 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:29:41.751870   54101 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:29:41.751978   54101 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 00:29:41.757399   54101 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:29:41.762811   54101 out.go:252]   - Generating certificates and keys ...
	I1212 00:29:41.762902   54101 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 00:29:41.762969   54101 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 00:29:41.763059   54101 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:29:41.763119   54101 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:29:41.763187   54101 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:29:41.763239   54101 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 00:29:41.763301   54101 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:29:41.763361   54101 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:29:41.763434   54101 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:29:41.763505   54101 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:29:41.763542   54101 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 00:29:41.763596   54101 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:29:42.025181   54101 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:29:42.229266   54101 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 00:29:42.409579   54101 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:29:42.479383   54101 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:29:43.146782   54101 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:29:43.147428   54101 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:29:43.150122   54101 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:29:43.153470   54101 out.go:252]   - Booting up control plane ...
	I1212 00:29:43.153571   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:29:43.153647   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:29:43.153712   54101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:29:43.174954   54101 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:29:43.175084   54101 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 00:29:43.182722   54101 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 00:29:43.183334   54101 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:29:43.183511   54101 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 00:29:43.327482   54101 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 00:29:43.327594   54101 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 00:33:43.326577   54101 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001134626s
	I1212 00:33:43.326601   54101 kubeadm.go:319] 
	I1212 00:33:43.326657   54101 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 00:33:43.326688   54101 kubeadm.go:319] 	- The kubelet is not running
	I1212 00:33:43.326791   54101 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:33:43.326796   54101 kubeadm.go:319] 
	I1212 00:33:43.326899   54101 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:33:43.326930   54101 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 00:33:43.326959   54101 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 00:33:43.326962   54101 kubeadm.go:319] 
	I1212 00:33:43.331146   54101 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 00:33:43.331567   54101 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 00:33:43.331673   54101 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:33:43.331909   54101 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 00:33:43.331913   54101 kubeadm.go:319] 
	I1212 00:33:43.331980   54101 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
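Note: the retry fails identically, which points to a configuration problem rather than a flake. For reference, what the cgroup v1 warning asks for is a KubeletConfiguration with failCgroupV1 set to false (the YAML field is the camelCased form of the FailCgroupV1 option named in the warning). A minimal sketch under that assumption; the file path is hypothetical, and the warning notes the preflight validation must also be skipped explicitly:

	# Hypothetical drop-in illustrating the setting the warning describes; minikube
	# would need to merge it via its [patches] step for kubeletconfiguration.
	cat <<'EOF' >/tmp/kubelet-failcgroupv1.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF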
	I1212 00:33:43.332070   54101 kubeadm.go:403] duration metric: took 12m8.353678295s to StartCluster
	I1212 00:33:43.332098   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:33:43.332159   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:33:43.356905   54101 cri.go:89] found id: ""
	I1212 00:33:43.356919   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.356925   54101 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:33:43.356930   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 00:33:43.356985   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:33:43.381448   54101 cri.go:89] found id: ""
	I1212 00:33:43.381464   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.381471   54101 logs.go:284] No container was found matching "etcd"
	I1212 00:33:43.381477   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 00:33:43.381541   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:33:43.409467   54101 cri.go:89] found id: ""
	I1212 00:33:43.409480   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.409487   54101 logs.go:284] No container was found matching "coredns"
	I1212 00:33:43.409492   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:33:43.409550   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:33:43.434352   54101 cri.go:89] found id: ""
	I1212 00:33:43.434367   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.434375   54101 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:33:43.434381   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:33:43.434439   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:33:43.458566   54101 cri.go:89] found id: ""
	I1212 00:33:43.458581   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.458588   54101 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:33:43.458593   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:33:43.458661   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:33:43.482646   54101 cri.go:89] found id: ""
	I1212 00:33:43.482660   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.482667   54101 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:33:43.482672   54101 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 00:33:43.482728   54101 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:33:43.507433   54101 cri.go:89] found id: ""
	I1212 00:33:43.507445   54101 logs.go:282] 0 containers: []
	W1212 00:33:43.507452   54101 logs.go:284] No container was found matching "kindnet"
	I1212 00:33:43.507461   54101 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:33:43.507472   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:33:43.575281   54101 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:33:43.567196   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.568177   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.569762   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.570292   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.571460   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 00:33:43.567196   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.568177   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.569762   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.570292   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:33:43.571460   20881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:33:43.575296   54101 logs.go:123] Gathering logs for containerd ...
	I1212 00:33:43.575305   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 00:33:43.637567   54101 logs.go:123] Gathering logs for container status ...
	I1212 00:33:43.637585   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 00:33:43.665505   54101 logs.go:123] Gathering logs for kubelet ...
	I1212 00:33:43.665520   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:33:43.723897   54101 logs.go:123] Gathering logs for dmesg ...
	I1212 00:33:43.723913   54101 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 00:33:43.734646   54101 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001134626s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 00:33:43.734686   54101 out.go:285] * 
	W1212 00:33:43.734800   54101 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [... identical to the kubeadm init output above (same preflight warnings, same 4m0s kubelet health-check timeout) ...]
	W1212 00:33:43.734860   54101 out.go:285] * 
	W1212 00:33:43.737311   54101 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:33:43.743292   54101 out.go:203] 
	W1212 00:33:43.746156   54101 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [... identical to the kubeadm init output above (same preflight warnings, same 4m0s kubelet health-check timeout) ...]
	W1212 00:33:43.746395   54101 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 00:33:43.746473   54101 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 00:33:43.751052   54101 out.go:203] 
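	Minikube's own suggestion above names the retry flag. A sketch of that retry, assembled from the start parameters this profile was created with (a hedged illustration, not a command this run executed):

	out/minikube-linux-arm64 start -p functional-767012 --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd

	Note, though, that the kubelet journal further down fails on cgroup v1 validation rather than on the cgroup driver, so this flag alone may not clear the failure here.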
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272455867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272520269Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272625542Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272714248Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272776665Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272836596Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272893384Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.272958435Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.273027211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.273124763Z" level=info msg="Connect containerd service"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.273469529Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.274122072Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287381427Z" level=info msg="Start subscribing containerd event"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287554622Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287703153Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.287625047Z" level=info msg="Start recovering state"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327211013Z" level=info msg="Start event monitor"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327399462Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327470929Z" level=info msg="Start streaming server"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327536341Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327597642Z" level=info msg="runtime interface starting up..."
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327652682Z" level=info msg="starting plugins..."
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327716215Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 00:21:33 functional-767012 containerd[9667]: time="2025-12-12T00:21:33.327919745Z" level=info msg="containerd successfully booted in 0.080422s"
	Dec 12 00:21:33 functional-767012 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:35:32.588896   22337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:32.589405   22337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:32.591173   22337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:32.591476   22337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:32.593062   22337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 00:35:32 up  1:17,  0 user,  load average: 0.18, 0.19, 0.33
	Linux functional-767012 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:35:29 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:29 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 462.
	Dec 12 00:35:29 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:29 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:29 functional-767012 kubelet[22220]: E1212 00:35:29.828300   22220 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:29 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:29 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:30 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 463.
	Dec 12 00:35:30 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:30 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:30 functional-767012 kubelet[22226]: E1212 00:35:30.580832   22226 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:30 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:30 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:31 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 464.
	Dec 12 00:35:31 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:31 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:31 functional-767012 kubelet[22231]: E1212 00:35:31.346676   22231 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:31 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:31 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:32 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 465.
	Dec 12 00:35:32 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:32 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:32 functional-767012 kubelet[22251]: E1212 00:35:32.101516   22251 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:32 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:32 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
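The kubelet crash loop above fails validation with "kubelet is configured to not run on a host using cgroup v1", which matches the FailCgroupV1 preflight warning in the kubeadm output. A hedged sketch of the opt-out that warning describes (the config path is the one the [kubelet-start] phase wrote; 'failCgroupV1' is the KubeletConfiguration spelling of that option):

	# Confirm the host is actually on cgroup v1: 'tmpfs' means v1, 'cgroup2fs' means v2
	stat -fc %T /sys/fs/cgroup
	# Re-enable cgroup v1 support for kubelet v1.35+, then restart and re-probe health
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet
	curl -sSL http://127.0.0.1:10248/healthz

Since kubeadm rewrites /var/lib/kubelet/config.yaml on the next init, the durable place for this field would be the kubeletconfiguration patch that the [patches] phase in the output above already applies.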
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012: exit status 2 (345.49631ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-767012" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.48s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[... identical WARNING repeated 9 more times ...]
I1212 00:34:02.170301    4290 retry.go:31] will retry after 1.957667561s: Temporary Error: Get "http://10.96.147.68": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[... identical WARNING repeated 11 more times ...]
I1212 00:34:14.128720    4290 retry.go:31] will retry after 4.979091465s: Temporary Error: Get "http://10.96.147.68": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[... identical WARNING repeated 14 more times ...]
I1212 00:34:29.108974    4290 retry.go:31] will retry after 8.52860364s: Temporary Error: Get "http://10.96.147.68": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[... identical WARNING repeated 18 more times ...]
I1212 00:34:47.639177    4290 retry.go:31] will retry after 13.013307959s: Temporary Error: Get "http://10.96.147.68": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[... identical WARNING repeated 22 more times ...]
I1212 00:35:10.653524    4290 retry.go:31] will retry after 10.179166436s: Temporary Error: Get "http://10.96.147.68": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous warning repeated 93 more times]
E1212 00:36:44.690145    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[helpers_test.go:338 logged the same "connection refused" warning 19 more times while polling]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012: exit status 2 (309.930519ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-767012" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-767012
helpers_test.go:244: (dbg) docker inspect functional-767012:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	        "Created": "2025-12-12T00:06:52.261765556Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42951,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:06:52.317917194Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hostname",
	        "HostsPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hosts",
	        "LogPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e-json.log",
	        "Name": "/functional-767012",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-767012:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-767012",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	                "LowerDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-767012",
	                "Source": "/var/lib/docker/volumes/functional-767012/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-767012",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-767012",
	                "name.minikube.sigs.k8s.io": "functional-767012",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e781257da3adf1d3284ab2a6de0168c3db7957f25a7e53d0015250294193762d",
	            "SandboxKey": "/var/run/docker/netns/e781257da3ad",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-767012": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:4d:78:ba:7d:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "83467cc4cb13818b98ec0d7cb5fc0064ea6eb2c8db4256a8a81330921aa2d9a4",
	                    "EndpointID": "b787b732d8d748776ceeb6e65fab51cc1e79758446bc85ac20043b35593fab12",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-767012",
	                        "6585a82fe5e6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
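The inspect output shows the node container itself is fine: it is Running, and the apiserver port 8441/tcp is still published to the host (127.0.0.1:32791 in this run). The refusals therefore come from inside the node, not from Docker networking. A minimal probe through the published port (a sketch using the host port from the inspect above; -k because the apiserver serves a cert the host does not trust):

	# probe the apiserver health endpoint via the published host port
	curl -k https://127.0.0.1:32791/healthz

A refused connection here would confirm that no apiserver process is listening inside the node, which matches the "Stopped" apiserver status reported earlier.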
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012: exit status 2 (320.645132ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-767012 image load --daemon kicbase/echo-server:functional-767012 --alsologtostderr                                                                   │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image          │ functional-767012 image ls                                                                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image          │ functional-767012 image save kicbase/echo-server:functional-767012 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image          │ functional-767012 image rm kicbase/echo-server:functional-767012 --alsologtostderr                                                                              │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image          │ functional-767012 image ls                                                                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image          │ functional-767012 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image          │ functional-767012 image ls                                                                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image          │ functional-767012 image save --daemon kicbase/echo-server:functional-767012 --alsologtostderr                                                                   │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh            │ functional-767012 ssh sudo cat /etc/test/nested/copy/4290/hosts                                                                                                 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh            │ functional-767012 ssh sudo cat /etc/ssl/certs/4290.pem                                                                                                          │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh            │ functional-767012 ssh sudo cat /usr/share/ca-certificates/4290.pem                                                                                              │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh            │ functional-767012 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh            │ functional-767012 ssh sudo cat /etc/ssl/certs/42902.pem                                                                                                         │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh            │ functional-767012 ssh sudo cat /usr/share/ca-certificates/42902.pem                                                                                             │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh            │ functional-767012 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:36 UTC │
	│ image          │ functional-767012 image ls --format short --alsologtostderr                                                                                                     │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:36 UTC │ 12 Dec 25 00:36 UTC │
	│ image          │ functional-767012 image ls --format yaml --alsologtostderr                                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:36 UTC │ 12 Dec 25 00:36 UTC │
	│ ssh            │ functional-767012 ssh pgrep buildkitd                                                                                                                           │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:36 UTC │                     │
	│ image          │ functional-767012 image build -t localhost/my-image:functional-767012 testdata/build --alsologtostderr                                                          │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:36 UTC │ 12 Dec 25 00:36 UTC │
	│ image          │ functional-767012 image ls                                                                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:36 UTC │ 12 Dec 25 00:36 UTC │
	│ image          │ functional-767012 image ls --format json --alsologtostderr                                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:36 UTC │ 12 Dec 25 00:36 UTC │
	│ image          │ functional-767012 image ls --format table --alsologtostderr                                                                                                     │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:36 UTC │ 12 Dec 25 00:36 UTC │
	│ update-context │ functional-767012 update-context --alsologtostderr -v=2                                                                                                         │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:36 UTC │ 12 Dec 25 00:36 UTC │
	│ update-context │ functional-767012 update-context --alsologtostderr -v=2                                                                                                         │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:36 UTC │ 12 Dec 25 00:36 UTC │
	│ update-context │ functional-767012 update-context --alsologtostderr -v=2                                                                                                         │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:36 UTC │ 12 Dec 25 00:36 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:35:48
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:35:48.421297   71358 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:35:48.421486   71358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:48.421517   71358 out.go:374] Setting ErrFile to fd 2...
	I1212 00:35:48.421538   71358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:48.421819   71358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:35:48.422212   71358 out.go:368] Setting JSON to false
	I1212 00:35:48.423061   71358 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4695,"bootTime":1765495054,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 00:35:48.423162   71358 start.go:143] virtualization:  
	I1212 00:35:48.426514   71358 out.go:179] * [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:35:48.429578   71358 notify.go:221] Checking for updates...
	I1212 00:35:48.430099   71358 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:35:48.433220   71358 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:35:48.436246   71358 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:35:48.439180   71358 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 00:35:48.441913   71358 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:35:48.444801   71358 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:35:48.448078   71358 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:48.448720   71358 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:35:48.471395   71358 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:35:48.471520   71358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:48.541609   71358 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:35:48.526256938 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:35:48.541956   71358 docker.go:319] overlay module found
	I1212 00:35:48.547082   71358 out.go:179] * Using the docker driver based on existing profile
	I1212 00:35:48.549944   71358 start.go:309] selected driver: docker
	I1212 00:35:48.549967   71358 start.go:927] validating driver "docker" against &{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:48.550052   71358 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:35:48.550151   71358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:48.627972   71358 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:35:48.618983237 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:35:48.628394   71358 cni.go:84] Creating CNI manager for ""
	I1212 00:35:48.628445   71358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:35:48.628480   71358 start.go:353] cluster config:
	{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:48.633520   71358 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 00:35:53 functional-767012 containerd[9667]: time="2025-12-12T00:35:53.380825675Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:35:53 functional-767012 containerd[9667]: time="2025-12-12T00:35:53.381423076Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-767012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:35:54 functional-767012 containerd[9667]: time="2025-12-12T00:35:54.431712064Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-767012\""
	Dec 12 00:35:54 functional-767012 containerd[9667]: time="2025-12-12T00:35:54.434504007Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-767012\""
	Dec 12 00:35:54 functional-767012 containerd[9667]: time="2025-12-12T00:35:54.436942844Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 12 00:35:54 functional-767012 containerd[9667]: time="2025-12-12T00:35:54.446189179Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-767012\" returns successfully"
	Dec 12 00:35:54 functional-767012 containerd[9667]: time="2025-12-12T00:35:54.686173769Z" level=info msg="No images store for sha256:cf0e9913d0048fc6b8fcd5596db3f32511553fbf5636773b0afb4b58b09f08d6"
	Dec 12 00:35:54 functional-767012 containerd[9667]: time="2025-12-12T00:35:54.688384040Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-767012\""
	Dec 12 00:35:54 functional-767012 containerd[9667]: time="2025-12-12T00:35:54.695294587Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:35:54 functional-767012 containerd[9667]: time="2025-12-12T00:35:54.695845785Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-767012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:35:55 functional-767012 containerd[9667]: time="2025-12-12T00:35:55.481041900Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-767012\""
	Dec 12 00:35:55 functional-767012 containerd[9667]: time="2025-12-12T00:35:55.483575262Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-767012\""
	Dec 12 00:35:55 functional-767012 containerd[9667]: time="2025-12-12T00:35:55.485491357Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 12 00:35:55 functional-767012 containerd[9667]: time="2025-12-12T00:35:55.493932508Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-767012\" returns successfully"
	Dec 12 00:35:56 functional-767012 containerd[9667]: time="2025-12-12T00:35:56.155064892Z" level=info msg="No images store for sha256:121be4686d244c220df0f7b34f3d349beec79e2f1acbf4e41d94b7ff44846cc2"
	Dec 12 00:35:56 functional-767012 containerd[9667]: time="2025-12-12T00:35:56.157650251Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-767012\""
	Dec 12 00:35:56 functional-767012 containerd[9667]: time="2025-12-12T00:35:56.165824402Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:35:56 functional-767012 containerd[9667]: time="2025-12-12T00:35:56.166149626Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-767012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:36:04 functional-767012 containerd[9667]: time="2025-12-12T00:36:04.140804142Z" level=info msg="connecting to shim t5k4og08wxu3hn18nx7enxg22" address="unix:///run/containerd/s/f4f7c1c05a2a629fbe955eaab87f295972e866d78a113d6e4352d869065a2330" namespace=k8s.io protocol=ttrpc version=3
	Dec 12 00:36:04 functional-767012 containerd[9667]: time="2025-12-12T00:36:04.219262086Z" level=info msg="shim disconnected" id=t5k4og08wxu3hn18nx7enxg22 namespace=k8s.io
	Dec 12 00:36:04 functional-767012 containerd[9667]: time="2025-12-12T00:36:04.220087446Z" level=info msg="cleaning up after shim disconnected" id=t5k4og08wxu3hn18nx7enxg22 namespace=k8s.io
	Dec 12 00:36:04 functional-767012 containerd[9667]: time="2025-12-12T00:36:04.220213594Z" level=info msg="cleaning up dead shim" id=t5k4og08wxu3hn18nx7enxg22 namespace=k8s.io
	Dec 12 00:36:04 functional-767012 containerd[9667]: time="2025-12-12T00:36:04.521122080Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-767012\""
	Dec 12 00:36:04 functional-767012 containerd[9667]: time="2025-12-12T00:36:04.529721715Z" level=info msg="ImageCreate event name:\"sha256:29179774bb553e6bbead5da7f0ea2f255bf02b6fc404c1c7cebcea17a3ffcc75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:36:04 functional-767012 containerd[9667]: time="2025-12-12T00:36:04.530211217Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-767012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:37:53.719925   25091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:53.720692   25091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:53.722353   25091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:53.723089   25091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:37:53.724656   25091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 00:37:53 up  1:20,  0 user,  load average: 0.11, 0.21, 0.32
	Linux functional-767012 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:37:50 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:37:50 functional-767012 kubelet[24957]: E1212 00:37:50.828574   24957 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:37:50 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:37:50 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:37:51 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 651.
	Dec 12 00:37:51 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:37:51 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:37:51 functional-767012 kubelet[24963]: E1212 00:37:51.577139   24963 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:37:51 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:37:51 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:37:52 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 652.
	Dec 12 00:37:52 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:37:52 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:37:52 functional-767012 kubelet[24969]: E1212 00:37:52.331165   24969 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:37:52 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:37:52 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:37:53 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 653.
	Dec 12 00:37:53 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:37:53 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:37:53 functional-767012 kubelet[24998]: E1212 00:37:53.100965   24998 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:37:53 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:37:53 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:37:53 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 654.
	Dec 12 00:37:53 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:37:53 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012: exit status 2 (315.769898ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-767012" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.62s)
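The kubelet journal above is the likely root cause for this whole group of failures: the v1.35.0-beta.0 kubelet exits during config validation because the node is on the legacy cgroup v1 hierarchy ("kubelet is configured to not run on a host using cgroup v1"), and systemd has restarted it more than 650 times, so the apiserver never comes back and every kubectl call sees "connection refused". The Jenkins host is Ubuntu 20.04 on a 5.15 kernel, which still boots with cgroup v1 by default. Which hierarchy a machine is on can be checked directly (a sketch; stat reports cgroup2fs for the unified v2 hierarchy and tmpfs for v1):

	# on the Jenkins host
	stat -fc %T /sys/fs/cgroup/
	# inside the kic node container from this run
	docker exec functional-767012 stat -fc %T /sys/fs/cgroup/

On a systemd host, booting with systemd.unified_cgroup_hierarchy=1 on the kernel command line (or moving the workers to a release that defaults to cgroup v2, such as Ubuntu 22.04) switches the hierarchy and should let this kubelet start.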

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-767012 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-767012 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (72.728619ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-767012 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
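The template error itself is a secondary symptom: with the apiserver refusing connections, kubectl gets back an empty List, and index .items 0 on an empty slice is exactly what produces the "slice index out of range" message. A range-based template degrades to empty output instead of an error when no nodes come back (a sketch using the same context):

	# range over .items emits nothing for an empty list rather than erroring
	kubectl --context functional-767012 get nodes -o go-template --template='{{range .items}}{{range $k, $v := .metadata.labels}}{{$k}} {{end}}{{end}}'

The real failure is still the dead apiserver from the kubelet cgroup v1 crash loop noted above.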
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
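Note: each NodeLabels assertion above fails the same way, because the go-template evaluates (index .items 0) unconditionally and panics with "slice index out of range" when the node list comes back empty. A guarded variant of the same query, shown only as a sketch and not the command the test actually runs, would tolerate the empty list:

	kubectl --context functional-767012 get nodes -o go-template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{else}}NoNodes{{end}}'

The {{if .items}} guard removes only the template panic; with the apiserver refusing connections on 192.168.49.2:8441, the assertion would still fail.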
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-767012
helpers_test.go:244: (dbg) docker inspect functional-767012:

-- stdout --
	[
	    {
	        "Id": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	        "Created": "2025-12-12T00:06:52.261765556Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42951,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T00:06:52.317917194Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hostname",
	        "HostsPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hosts",
	        "LogPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e-json.log",
	        "Name": "/functional-767012",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-767012:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-767012",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
	                "LowerDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-767012",
	                "Source": "/var/lib/docker/volumes/functional-767012/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-767012",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-767012",
	                "name.minikube.sigs.k8s.io": "functional-767012",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e781257da3adf1d3284ab2a6de0168c3db7957f25a7e53d0015250294193762d",
	            "SandboxKey": "/var/run/docker/netns/e781257da3ad",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-767012": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:4d:78:ba:7d:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "83467cc4cb13818b98ec0d7cb5fc0064ea6eb2c8db4256a8a81330921aa2d9a4",
	                    "EndpointID": "b787b732d8d748776ceeb6e65fab51cc1e79758446bc85ac20043b35593fab12",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-767012",
	                        "6585a82fe5e6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
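Note: the inspect output above shows the container itself is healthy (State.Status "running") and that 8441/tcp is published on 127.0.0.1:32791. A single field can be extracted from that JSON with a docker format template, mirroring the lookup the harness performs for 22/tcp later in these logs; a diagnostic sketch, assuming the same profile name:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-767012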
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012
E1212 00:35:57.042499    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012: exit status 2 (303.947368ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo322877521/001:/mount2 --alsologtostderr -v=1                             │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ ssh       │ functional-767012 ssh findmnt -T /mount1                                                                                                                        │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ ssh       │ functional-767012 ssh findmnt -T /mount1                                                                                                                        │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh       │ functional-767012 ssh findmnt -T /mount2                                                                                                                        │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh       │ functional-767012 ssh findmnt -T /mount3                                                                                                                        │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ mount     │ -p functional-767012 --kill=true                                                                                                                                │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ start     │ -p functional-767012 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0             │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ start     │ -p functional-767012 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0             │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ start     │ -p functional-767012 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                       │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-767012 --alsologtostderr -v=1                                                                                                  │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ license   │                                                                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ ssh       │ functional-767012 ssh sudo systemctl is-active docker                                                                                                           │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ ssh       │ functional-767012 ssh sudo systemctl is-active crio                                                                                                             │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │                     │
	│ image     │ functional-767012 image load --daemon kicbase/echo-server:functional-767012 --alsologtostderr                                                                   │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image     │ functional-767012 image ls                                                                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image     │ functional-767012 image load --daemon kicbase/echo-server:functional-767012 --alsologtostderr                                                                   │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image     │ functional-767012 image ls                                                                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image     │ functional-767012 image load --daemon kicbase/echo-server:functional-767012 --alsologtostderr                                                                   │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image     │ functional-767012 image ls                                                                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image     │ functional-767012 image save kicbase/echo-server:functional-767012 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image     │ functional-767012 image rm kicbase/echo-server:functional-767012 --alsologtostderr                                                                              │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image     │ functional-767012 image ls                                                                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image     │ functional-767012 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image     │ functional-767012 image ls                                                                                                                                      │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	│ image     │ functional-767012 image save --daemon kicbase/echo-server:functional-767012 --alsologtostderr                                                                   │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:35 UTC │ 12 Dec 25 00:35 UTC │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 00:35:48
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:35:48.421297   71358 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:35:48.421486   71358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:48.421517   71358 out.go:374] Setting ErrFile to fd 2...
	I1212 00:35:48.421538   71358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:48.421819   71358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:35:48.422212   71358 out.go:368] Setting JSON to false
	I1212 00:35:48.423061   71358 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4695,"bootTime":1765495054,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 00:35:48.423162   71358 start.go:143] virtualization:  
	I1212 00:35:48.426514   71358 out.go:179] * [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:35:48.429578   71358 notify.go:221] Checking for updates...
	I1212 00:35:48.430099   71358 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:35:48.433220   71358 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:35:48.436246   71358 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:35:48.439180   71358 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 00:35:48.441913   71358 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:35:48.444801   71358 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:35:48.448078   71358 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:48.448720   71358 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:35:48.471395   71358 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:35:48.471520   71358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:48.541609   71358 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:35:48.526256938 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:35:48.541956   71358 docker.go:319] overlay module found
	I1212 00:35:48.547082   71358 out.go:179] * Using the docker driver based on existing profile
	I1212 00:35:48.549944   71358 start.go:309] selected driver: docker
	I1212 00:35:48.549967   71358 start.go:927] validating driver "docker" against &{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:48.550052   71358 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:35:48.550151   71358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:48.627972   71358 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:35:48.618983237 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:35:48.628394   71358 cni.go:84] Creating CNI manager for ""
	I1212 00:35:48.628445   71358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 00:35:48.628480   71358 start.go:353] cluster config:
	{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:48.633520   71358 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 00:35:52 functional-767012 containerd[9667]: time="2025-12-12T00:35:52.323547036Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-767012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:35:53 functional-767012 containerd[9667]: time="2025-12-12T00:35:53.125691421Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-767012\""
	Dec 12 00:35:53 functional-767012 containerd[9667]: time="2025-12-12T00:35:53.128345368Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-767012\""
	Dec 12 00:35:53 functional-767012 containerd[9667]: time="2025-12-12T00:35:53.131077913Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 12 00:35:53 functional-767012 containerd[9667]: time="2025-12-12T00:35:53.139169528Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-767012\" returns successfully"
	Dec 12 00:35:53 functional-767012 containerd[9667]: time="2025-12-12T00:35:53.370949733Z" level=info msg="No images store for sha256:cf0e9913d0048fc6b8fcd5596db3f32511553fbf5636773b0afb4b58b09f08d6"
	Dec 12 00:35:53 functional-767012 containerd[9667]: time="2025-12-12T00:35:53.373050866Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-767012\""
	Dec 12 00:35:53 functional-767012 containerd[9667]: time="2025-12-12T00:35:53.380825675Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:35:53 functional-767012 containerd[9667]: time="2025-12-12T00:35:53.381423076Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-767012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:35:54 functional-767012 containerd[9667]: time="2025-12-12T00:35:54.431712064Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-767012\""
	Dec 12 00:35:54 functional-767012 containerd[9667]: time="2025-12-12T00:35:54.434504007Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-767012\""
	Dec 12 00:35:54 functional-767012 containerd[9667]: time="2025-12-12T00:35:54.436942844Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 12 00:35:54 functional-767012 containerd[9667]: time="2025-12-12T00:35:54.446189179Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-767012\" returns successfully"
	Dec 12 00:35:54 functional-767012 containerd[9667]: time="2025-12-12T00:35:54.686173769Z" level=info msg="No images store for sha256:cf0e9913d0048fc6b8fcd5596db3f32511553fbf5636773b0afb4b58b09f08d6"
	Dec 12 00:35:54 functional-767012 containerd[9667]: time="2025-12-12T00:35:54.688384040Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-767012\""
	Dec 12 00:35:54 functional-767012 containerd[9667]: time="2025-12-12T00:35:54.695294587Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:35:54 functional-767012 containerd[9667]: time="2025-12-12T00:35:54.695845785Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-767012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:35:55 functional-767012 containerd[9667]: time="2025-12-12T00:35:55.481041900Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-767012\""
	Dec 12 00:35:55 functional-767012 containerd[9667]: time="2025-12-12T00:35:55.483575262Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-767012\""
	Dec 12 00:35:55 functional-767012 containerd[9667]: time="2025-12-12T00:35:55.485491357Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 12 00:35:55 functional-767012 containerd[9667]: time="2025-12-12T00:35:55.493932508Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-767012\" returns successfully"
	Dec 12 00:35:56 functional-767012 containerd[9667]: time="2025-12-12T00:35:56.155064892Z" level=info msg="No images store for sha256:121be4686d244c220df0f7b34f3d349beec79e2f1acbf4e41d94b7ff44846cc2"
	Dec 12 00:35:56 functional-767012 containerd[9667]: time="2025-12-12T00:35:56.157650251Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-767012\""
	Dec 12 00:35:56 functional-767012 containerd[9667]: time="2025-12-12T00:35:56.165824402Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 00:35:56 functional-767012 containerd[9667]: time="2025-12-12T00:35:56.166149626Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-767012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 00:35:57.735135   23739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:57.735864   23739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:57.737594   23739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:57.738176   23739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 00:35:57.739760   23739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 00:35:57 up  1:18,  0 user,  load average: 0.50, 0.27, 0.35
	Linux functional-767012 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 00:35:54 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:55 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 496.
	Dec 12 00:35:55 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:55 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:55 functional-767012 kubelet[23518]: E1212 00:35:55.342565   23518 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:55 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:55 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:56 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 497.
	Dec 12 00:35:56 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:56 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:56 functional-767012 kubelet[23579]: E1212 00:35:56.087849   23579 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:56 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:56 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:56 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 498.
	Dec 12 00:35:56 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:56 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:56 functional-767012 kubelet[23636]: E1212 00:35:56.841567   23636 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:56 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:56 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 00:35:57 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 499.
	Dec 12 00:35:57 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:57 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 00:35:57 functional-767012 kubelet[23702]: E1212 00:35:57.597988   23702 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 00:35:57 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 00:35:57 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
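Note: the kubelet section above is the visible root cause in this log bundle: kubelet exits immediately ("kubelet is configured to not run on a host using cgroup v1", restart counter near 500), so the apiserver never comes up and the dependent tests all see connection refused. One common way to confirm which cgroup version the node filesystem exposes, offered here only as a diagnostic sketch, is:

	out/minikube-linux-arm64 -p functional-767012 ssh -- stat -fc %T /sys/fs/cgroup/

which prints cgroup2fs on a cgroup v2 host and tmpfs on a cgroup v1 host.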
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012: exit status 2 (318.447765ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-767012" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.38s)
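Note: the two status probes above disagree because they read different fields of the same status object: {{.Host}} reports the container (Running) while {{.APIServer}} reports the control plane (Stopped). Both, together with the kubelet and kubeconfig fields (assumed here from minikube's documented status template), can be read in one call; a sketch:

	out/minikube-linux-arm64 status -p functional-767012 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'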

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-767012 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-767012 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1212 00:33:51.607289   67142 out.go:360] Setting OutFile to fd 1 ...
I1212 00:33:51.607448   67142 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:33:51.607455   67142 out.go:374] Setting ErrFile to fd 2...
I1212 00:33:51.607460   67142 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:33:51.607861   67142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
I1212 00:33:51.608210   67142 mustload.go:66] Loading cluster: functional-767012
I1212 00:33:51.608890   67142 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 00:33:51.609556   67142 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
I1212 00:33:51.645008   67142 host.go:66] Checking if "functional-767012" exists ...
I1212 00:33:51.645331   67142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 00:33:51.762545   67142 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:33:51.750588762 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1212 00:33:51.762667   67142 api_server.go:166] Checking apiserver status ...
I1212 00:33:51.762725   67142 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:33:51.762774   67142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
I1212 00:33:51.803929   67142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
W1212 00:33:51.925459   67142 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1212 00:33:51.928815   67142 out.go:179] * The control-plane node functional-767012 apiserver is not running: (state=Stopped)
I1212 00:33:51.932385   67142 out.go:179]   To start a cluster, run: "minikube start -p functional-767012"

stdout: * The control-plane node functional-767012 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-767012"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-767012 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 67143: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-767012 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-767012 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-767012 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-767012 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-767012 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)
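Note: the tunnel command exits 103 before creating any routes because its preflight apiserver check fails: the pgrep call logged above finds no kube-apiserver process inside the node. The same check can be reproduced by hand; a sketch, noting that pgrep exits 1 when nothing matches:

	out/minikube-linux-arm64 -p functional-767012 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'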

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-767012 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-767012 apply -f testdata/testsvc.yaml: exit status 1 (117.59549ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-767012 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.12s)
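Note: the apply fails during client-side validation because kubectl cannot download the OpenAPI schema from the dead apiserver. Following the error message's own suggestion, validation can be skipped; only a sketch here, since the request itself would still be refused:

	kubectl --context functional-767012 apply --validate=false -f testdata/testsvc.yaml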

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (98.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.96.147.68": Temporary Error: Get "http://10.96.147.68": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-767012 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-767012 get svc nginx-svc: exit status 1 (61.378265ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-767012 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (98.73s)
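"Client.Timeout exceeded while awaiting headers" means the poll used an http.Client with a hard deadline rather than hanging forever. A minimal sketch of that polling pattern, assuming the ClusterIP from the log and an invented retry budget:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 3 * time.Second} // hard deadline, as in the log's error
	for attempt := 0; attempt < 3; attempt++ { // retry budget invented for illustration
		resp, err := client.Get("http://10.96.147.68") // ClusterIP reported by the failing test
		if err != nil {
			fmt.Println(err)
			time.Sleep(time.Second)
			continue
		}
		resp.Body.Close()
		return
	}
}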

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-767012 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-767012 create deployment hello-node --image kicbase/echo-server: exit status 1 (57.729705ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-767012 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 service list: exit status 103 (268.114657ms)

-- stdout --
	* The control-plane node functional-767012 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-767012"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-767012 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-767012 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-767012\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)
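Exit status 103 here is minikube reporting a recognized condition (the stdout explains: apiserver state=Stopped), not a crash; the same code repeats in the JSONOutput, HTTPS, Format and URL failures below. A hedged sketch of how a harness can distinguish such exits with os/exec (binary path and flags copied from the log; the meaning of 103 is inferred from the printed message, not from minikube's source):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-767012", "service", "list")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("exit=%d stdout=%q\n", ee.ExitCode(), out) // exit=103 accompanied the state=Stopped message
	}
}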

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 service list -o json: exit status 103 (259.519743ms)

-- stdout --
	* The control-plane node functional-767012 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-767012"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-767012 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 service --namespace=default --https --url hello-node: exit status 103 (296.199443ms)

-- stdout --
	* The control-plane node functional-767012 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-767012"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-767012 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 service hello-node --url --format={{.IP}}: exit status 103 (263.896535ms)

-- stdout --
	* The control-plane node functional-767012 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-767012"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-767012 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-767012 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-767012\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)
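The --format flag is rendered as a Go text/template over minikube's service-URL value, so when the command prints an advisory instead of executing the template, the IP check fails. A self-contained sketch of the mechanism, with a hypothetical struct standing in for minikube's internal type:

package main

import (
	"os"
	"text/template"
)

// svcURL is a hypothetical stand-in for the value minikube renders with --format.
type svcURL struct {
	IP   string
	Port int
}

func main() {
	tmpl := template.Must(template.New("svc").Parse("{{.IP}}"))
	_ = tmpl.Execute(os.Stdout, svcURL{IP: "192.168.49.2", Port: 31000}) // prints 192.168.49.2
}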

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 service hello-node --url: exit status 103 (271.134393ms)

-- stdout --
	* The control-plane node functional-767012 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-767012"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-767012 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-767012 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-767012"
functional_test.go:1579: failed to parse "* The control-plane node functional-767012 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-767012\"": parse "* The control-plane node functional-767012 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-767012\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.27s)
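The final parse error is Go's net/url rejecting the advisory text itself: the embedded newline is a control character, which url.Parse refuses outright. The failure reproduces in isolation:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// the two-line advisory captured in the log, fed to url.Parse as the test does
	s := "* The control-plane node functional-767012 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-767012\""
	if _, err := url.Parse(s); err != nil {
		fmt.Println(err) // parse ...: net/url: invalid control character in URL
	}
}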

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2804369017/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765499738560712358" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2804369017/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765499738560712358" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2804369017/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765499738560712358" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2804369017/001/test-1765499738560712358
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (356.88728ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1212 00:35:38.917900    4290 retry.go:31] will retry after 517.571736ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 00:35 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 00:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 00:35 test-1765499738560712358
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh cat /mount-9p/test-1765499738560712358
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-767012 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-767012 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (64.699106ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-767012 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (263.72798ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=38097)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 12 00:35 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 12 00:35 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 12 00:35 test-1765499738560712358
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-767012 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2804369017/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2804369017/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2804369017/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:38097
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2804369017/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2804369017/001:/mount-9p --alsologtostderr -v=1] stderr:
I1212 00:35:38.616478   69412 out.go:360] Setting OutFile to fd 1 ...
I1212 00:35:38.616697   69412 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:35:38.616715   69412 out.go:374] Setting ErrFile to fd 2...
I1212 00:35:38.616732   69412 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:35:38.616996   69412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
I1212 00:35:38.617309   69412 mustload.go:66] Loading cluster: functional-767012
I1212 00:35:38.617719   69412 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 00:35:38.618232   69412 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
I1212 00:35:38.637260   69412 host.go:66] Checking if "functional-767012" exists ...
I1212 00:35:38.637578   69412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 00:35:38.712811   69412 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:35:38.698693347 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1212 00:35:38.712966   69412 cli_runner.go:164] Run: docker network inspect functional-767012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1212 00:35:38.745727   69412 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2804369017/001 into VM as /mount-9p ...
I1212 00:35:38.749341   69412 out.go:179]   - Mount type:   9p
I1212 00:35:38.752253   69412 out.go:179]   - User ID:      docker
I1212 00:35:38.755162   69412 out.go:179]   - Group ID:     docker
I1212 00:35:38.757939   69412 out.go:179]   - Version:      9p2000.L
I1212 00:35:38.760725   69412 out.go:179]   - Message Size: 262144
I1212 00:35:38.763587   69412 out.go:179]   - Options:      map[]
I1212 00:35:38.766599   69412 out.go:179]   - Bind Address: 192.168.49.1:38097
I1212 00:35:38.769486   69412 out.go:179] * Userspace file server: 
I1212 00:35:38.769735   69412 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1212 00:35:38.769833   69412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
I1212 00:35:38.805111   69412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
I1212 00:35:38.926322   69412 mount.go:180] unmount for /mount-9p ran successfully
I1212 00:35:38.926372   69412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1212 00:35:38.934813   69412 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=38097,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1212 00:35:38.946066   69412 main.go:127] stdlog: ufs.go:141 connected
I1212 00:35:38.946255   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tversion tag 65535 msize 262144 version '9P2000.L'
I1212 00:35:38.946317   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rversion tag 65535 msize 262144 version '9P2000'
I1212 00:35:38.946539   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1212 00:35:38.946596   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rattach tag 0 aqid (ed6ccb ffc19bd 'd')
I1212 00:35:38.946892   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tstat tag 0 fid 0
I1212 00:35:38.946982   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6ccb ffc19bd 'd') m d775 at 0 mt 1765499738 l 4096 t 0 d 0 ext )
I1212 00:35:38.948331   69412 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/.mount-process: {Name:mke12541701e1ef80fecdb49497a07b7f9cf079c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:35:38.948537   69412 mount.go:105] mount successful: ""
I1212 00:35:38.952169   69412 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2804369017/001 to /mount-9p
I1212 00:35:38.955060   69412 out.go:203] 
I1212 00:35:38.957811   69412 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1212 00:35:39.971413   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tstat tag 0 fid 0
I1212 00:35:39.971488   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6ccb ffc19bd 'd') m d775 at 0 mt 1765499738 l 4096 t 0 d 0 ext )
I1212 00:35:39.971832   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Twalk tag 0 fid 0 newfid 1 
I1212 00:35:39.971904   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rwalk tag 0 
I1212 00:35:39.972056   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Topen tag 0 fid 1 mode 0
I1212 00:35:39.972128   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Ropen tag 0 qid (ed6ccb ffc19bd 'd') iounit 0
I1212 00:35:39.972236   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tstat tag 0 fid 0
I1212 00:35:39.972276   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6ccb ffc19bd 'd') m d775 at 0 mt 1765499738 l 4096 t 0 d 0 ext )
I1212 00:35:39.972422   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tread tag 0 fid 1 offset 0 count 262120
I1212 00:35:39.972543   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rread tag 0 count 258
I1212 00:35:39.972674   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tread tag 0 fid 1 offset 258 count 261862
I1212 00:35:39.972708   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rread tag 0 count 0
I1212 00:35:39.972826   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tread tag 0 fid 1 offset 258 count 262120
I1212 00:35:39.972853   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rread tag 0 count 0
I1212 00:35:39.973000   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1212 00:35:39.973031   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rwalk tag 0 (ed6ccc ffc19bd '') 
I1212 00:35:39.973145   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tstat tag 0 fid 2
I1212 00:35:39.973180   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6ccc ffc19bd '') m 644 at 0 mt 1765499738 l 24 t 0 d 0 ext )
I1212 00:35:39.973323   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tstat tag 0 fid 2
I1212 00:35:39.973357   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6ccc ffc19bd '') m 644 at 0 mt 1765499738 l 24 t 0 d 0 ext )
I1212 00:35:39.973472   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tclunk tag 0 fid 2
I1212 00:35:39.973506   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rclunk tag 0
I1212 00:35:39.973646   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1212 00:35:39.973677   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rwalk tag 0 (ed6ccd ffc19bd '') 
I1212 00:35:39.973800   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tstat tag 0 fid 2
I1212 00:35:39.973834   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6ccd ffc19bd '') m 644 at 0 mt 1765499738 l 24 t 0 d 0 ext )
I1212 00:35:39.973958   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tstat tag 0 fid 2
I1212 00:35:39.973991   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6ccd ffc19bd '') m 644 at 0 mt 1765499738 l 24 t 0 d 0 ext )
I1212 00:35:39.974103   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tclunk tag 0 fid 2
I1212 00:35:39.974125   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rclunk tag 0
I1212 00:35:39.974272   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Twalk tag 0 fid 0 newfid 2 0:'test-1765499738560712358' 
I1212 00:35:39.974305   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rwalk tag 0 (ed6cce ffc19bd '') 
I1212 00:35:39.974420   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tstat tag 0 fid 2
I1212 00:35:39.974449   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rstat tag 0 st ('test-1765499738560712358' 'jenkins' 'jenkins' '' q (ed6cce ffc19bd '') m 644 at 0 mt 1765499738 l 24 t 0 d 0 ext )
I1212 00:35:39.974578   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tstat tag 0 fid 2
I1212 00:35:39.974612   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rstat tag 0 st ('test-1765499738560712358' 'jenkins' 'jenkins' '' q (ed6cce ffc19bd '') m 644 at 0 mt 1765499738 l 24 t 0 d 0 ext )
I1212 00:35:39.974737   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tclunk tag 0 fid 2
I1212 00:35:39.974759   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rclunk tag 0
I1212 00:35:39.974870   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tread tag 0 fid 1 offset 258 count 262120
I1212 00:35:39.974898   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rread tag 0 count 0
I1212 00:35:39.975049   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tclunk tag 0 fid 1
I1212 00:35:39.975080   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rclunk tag 0
I1212 00:35:40.247539   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Twalk tag 0 fid 0 newfid 1 0:'test-1765499738560712358' 
I1212 00:35:40.247612   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rwalk tag 0 (ed6cce ffc19bd '') 
I1212 00:35:40.247815   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tstat tag 0 fid 1
I1212 00:35:40.247863   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rstat tag 0 st ('test-1765499738560712358' 'jenkins' 'jenkins' '' q (ed6cce ffc19bd '') m 644 at 0 mt 1765499738 l 24 t 0 d 0 ext )
I1212 00:35:40.248035   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Twalk tag 0 fid 1 newfid 2 
I1212 00:35:40.248066   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rwalk tag 0 
I1212 00:35:40.248217   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Topen tag 0 fid 2 mode 0
I1212 00:35:40.248269   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Ropen tag 0 qid (ed6cce ffc19bd '') iounit 0
I1212 00:35:40.248404   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tstat tag 0 fid 1
I1212 00:35:40.248445   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rstat tag 0 st ('test-1765499738560712358' 'jenkins' 'jenkins' '' q (ed6cce ffc19bd '') m 644 at 0 mt 1765499738 l 24 t 0 d 0 ext )
I1212 00:35:40.248610   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tread tag 0 fid 2 offset 0 count 262120
I1212 00:35:40.248656   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rread tag 0 count 24
I1212 00:35:40.248783   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tread tag 0 fid 2 offset 24 count 262120
I1212 00:35:40.248817   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rread tag 0 count 0
I1212 00:35:40.248971   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tread tag 0 fid 2 offset 24 count 262120
I1212 00:35:40.249022   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rread tag 0 count 0
I1212 00:35:40.249268   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tclunk tag 0 fid 2
I1212 00:35:40.249317   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rclunk tag 0
I1212 00:35:40.249463   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tclunk tag 0 fid 1
I1212 00:35:40.249497   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rclunk tag 0
I1212 00:35:40.581701   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tstat tag 0 fid 0
I1212 00:35:40.581775   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6ccb ffc19bd 'd') m d775 at 0 mt 1765499738 l 4096 t 0 d 0 ext )
I1212 00:35:40.582111   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Twalk tag 0 fid 0 newfid 1 
I1212 00:35:40.582155   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rwalk tag 0 
I1212 00:35:40.582321   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Topen tag 0 fid 1 mode 0
I1212 00:35:40.582385   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Ropen tag 0 qid (ed6ccb ffc19bd 'd') iounit 0
I1212 00:35:40.582527   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tstat tag 0 fid 0
I1212 00:35:40.582566   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6ccb ffc19bd 'd') m d775 at 0 mt 1765499738 l 4096 t 0 d 0 ext )
I1212 00:35:40.582719   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tread tag 0 fid 1 offset 0 count 262120
I1212 00:35:40.582821   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rread tag 0 count 258
I1212 00:35:40.582944   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tread tag 0 fid 1 offset 258 count 261862
I1212 00:35:40.582970   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rread tag 0 count 0
I1212 00:35:40.583108   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tread tag 0 fid 1 offset 258 count 262120
I1212 00:35:40.583136   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rread tag 0 count 0
I1212 00:35:40.583266   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1212 00:35:40.583300   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rwalk tag 0 (ed6ccc ffc19bd '') 
I1212 00:35:40.583440   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tstat tag 0 fid 2
I1212 00:35:40.583494   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6ccc ffc19bd '') m 644 at 0 mt 1765499738 l 24 t 0 d 0 ext )
I1212 00:35:40.583638   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tstat tag 0 fid 2
I1212 00:35:40.583674   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6ccc ffc19bd '') m 644 at 0 mt 1765499738 l 24 t 0 d 0 ext )
I1212 00:35:40.583799   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tclunk tag 0 fid 2
I1212 00:35:40.583827   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rclunk tag 0
I1212 00:35:40.583977   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1212 00:35:40.584009   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rwalk tag 0 (ed6ccd ffc19bd '') 
I1212 00:35:40.584120   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tstat tag 0 fid 2
I1212 00:35:40.584159   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6ccd ffc19bd '') m 644 at 0 mt 1765499738 l 24 t 0 d 0 ext )
I1212 00:35:40.584293   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tstat tag 0 fid 2
I1212 00:35:40.584336   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6ccd ffc19bd '') m 644 at 0 mt 1765499738 l 24 t 0 d 0 ext )
I1212 00:35:40.584450   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tclunk tag 0 fid 2
I1212 00:35:40.584477   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rclunk tag 0
I1212 00:35:40.584630   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Twalk tag 0 fid 0 newfid 2 0:'test-1765499738560712358' 
I1212 00:35:40.584665   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rwalk tag 0 (ed6cce ffc19bd '') 
I1212 00:35:40.584777   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tstat tag 0 fid 2
I1212 00:35:40.584807   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rstat tag 0 st ('test-1765499738560712358' 'jenkins' 'jenkins' '' q (ed6cce ffc19bd '') m 644 at 0 mt 1765499738 l 24 t 0 d 0 ext )
I1212 00:35:40.584934   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tstat tag 0 fid 2
I1212 00:35:40.584968   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rstat tag 0 st ('test-1765499738560712358' 'jenkins' 'jenkins' '' q (ed6cce ffc19bd '') m 644 at 0 mt 1765499738 l 24 t 0 d 0 ext )
I1212 00:35:40.585081   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tclunk tag 0 fid 2
I1212 00:35:40.585101   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rclunk tag 0
I1212 00:35:40.585225   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tread tag 0 fid 1 offset 258 count 262120
I1212 00:35:40.585254   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rread tag 0 count 0
I1212 00:35:40.585386   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tclunk tag 0 fid 1
I1212 00:35:40.585432   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rclunk tag 0
I1212 00:35:40.586647   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1212 00:35:40.586708   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rerror tag 0 ename 'file not found' ecode 0
I1212 00:35:40.868288   69412 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53434 Tclunk tag 0 fid 0
I1212 00:35:40.868344   69412 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53434 Rclunk tag 0
I1212 00:35:40.869438   69412 main.go:127] stdlog: ufs.go:147 disconnected
I1212 00:35:40.892223   69412 out.go:179] * Unmounting /mount-9p ...
I1212 00:35:40.895064   69412 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1212 00:35:40.901932   69412 mount.go:180] unmount for /mount-9p ran successfully
I1212 00:35:40.902038   69412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/.mount-process: {Name:mke12541701e1ef80fecdb49497a07b7f9cf079c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:35:40.905119   69412 out.go:203] 
W1212 00:35:40.908035   69412 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1212 00:35:40.910930   69412 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.43s)
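The 9P side of this test worked: the trace above shows a clean Tversion/Rversion handshake at msize 262144 (the server downgrading the requested 9P2000.L to 9P2000) and successful walks over the three test files; the failure is again the unreachable apiserver when the busybox pod is replaced. The "will retry after 517.571736ms" line earlier reflects a jittered-backoff helper; a minimal sketch of that pattern, with invented attempt and jitter parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter is a hypothetical helper mirroring the retry.go lines in the log:
// sleep roughly base plus a random jitter between attempts.
func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base + time.Duration(rand.Int63n(int64(base))) // ~517ms from a 500ms base is one plausible draw
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retryWithJitter(3, 500*time.Millisecond, func() error { return errors.New("exit status 1") })
}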

TestKubernetesUpgrade (805.54s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-439215 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-439215 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (43.785594099s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-439215
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-439215: (1.542548884s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-439215 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-439215 status --format={{.Host}}: exit status 7 (143.964314ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-439215 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-439215 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (12m34.33040635s)

-- stdout --
	* [kubernetes-upgrade-439215] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-439215" primary control-plane node in "kubernetes-upgrade-439215" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	I1212 01:06:58.124032  203848 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:06:58.124146  203848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:06:58.124200  203848 out.go:374] Setting ErrFile to fd 2...
	I1212 01:06:58.124207  203848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:06:58.124464  203848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:06:58.124823  203848 out.go:368] Setting JSON to false
	I1212 01:06:58.125751  203848 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6565,"bootTime":1765495054,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 01:06:58.125823  203848 start.go:143] virtualization:  
	I1212 01:06:58.129462  203848 out.go:179] * [kubernetes-upgrade-439215] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:06:58.132639  203848 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:06:58.132707  203848 notify.go:221] Checking for updates...
	I1212 01:06:58.139464  203848 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:06:58.142391  203848 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:06:58.145401  203848 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 01:06:58.148179  203848 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:06:58.151107  203848 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:06:58.154489  203848 config.go:182] Loaded profile config "kubernetes-upgrade-439215": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1212 01:06:58.155150  203848 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:06:58.196370  203848 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:06:58.196484  203848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:06:58.306035  203848 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:06:58.296643438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:06:58.306142  203848 docker.go:319] overlay module found
	I1212 01:06:58.309318  203848 out.go:179] * Using the docker driver based on existing profile
	I1212 01:06:58.312126  203848 start.go:309] selected driver: docker
	I1212 01:06:58.312147  203848 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-439215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-439215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:06:58.312253  203848 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:06:58.312964  203848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:06:58.397482  203848 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:06:58.387646186 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:06:58.398394  203848 cni.go:84] Creating CNI manager for ""
	I1212 01:06:58.398517  203848 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:06:58.398607  203848 start.go:353] cluster config:
	{Name:kubernetes-upgrade-439215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-439215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:06:58.403756  203848 out.go:179] * Starting "kubernetes-upgrade-439215" primary control-plane node in "kubernetes-upgrade-439215" cluster
	I1212 01:06:58.406682  203848 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 01:06:58.409549  203848 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:06:58.415112  203848 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:06:58.415163  203848 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 01:06:58.415174  203848 cache.go:65] Caching tarball of preloaded images
	I1212 01:06:58.415259  203848 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 01:06:58.415268  203848 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 01:06:58.415385  203848 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kubernetes-upgrade-439215/config.json ...
	I1212 01:06:58.415628  203848 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:06:58.459262  203848 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:06:58.459287  203848 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:06:58.459302  203848 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:06:58.459338  203848 start.go:360] acquireMachinesLock for kubernetes-upgrade-439215: {Name:mka96ebb7d0d88b2ddb12629c1c49b1a05393c51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:06:58.459404  203848 start.go:364] duration metric: took 44.218µs to acquireMachinesLock for "kubernetes-upgrade-439215"
	I1212 01:06:58.459429  203848 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:06:58.459435  203848 fix.go:54] fixHost starting: 
	I1212 01:06:58.459752  203848 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-439215 --format={{.State.Status}}
	I1212 01:06:58.496607  203848 fix.go:112] recreateIfNeeded on kubernetes-upgrade-439215: state=Stopped err=<nil>
	W1212 01:06:58.496638  203848 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:06:58.503377  203848 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-439215" ...
	I1212 01:06:58.503484  203848 cli_runner.go:164] Run: docker start kubernetes-upgrade-439215
	I1212 01:06:58.860435  203848 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-439215 --format={{.State.Status}}
	I1212 01:06:58.885274  203848 kic.go:430] container "kubernetes-upgrade-439215" state is running.
	I1212 01:06:58.885657  203848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-439215
	I1212 01:06:58.909359  203848 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kubernetes-upgrade-439215/config.json ...
	I1212 01:06:58.909588  203848 machine.go:94] provisionDockerMachine start ...
	I1212 01:06:58.909667  203848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-439215
	I1212 01:06:58.955197  203848 main.go:143] libmachine: Using SSH client type: native
	I1212 01:06:58.955540  203848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1212 01:06:58.955549  203848 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:06:58.956240  203848 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57286->127.0.0.1:33013: read: connection reset by peer
	I1212 01:07:02.110533  203848 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-439215
	
	I1212 01:07:02.110555  203848 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-439215"
	I1212 01:07:02.110621  203848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-439215
	I1212 01:07:02.128225  203848 main.go:143] libmachine: Using SSH client type: native
	I1212 01:07:02.128531  203848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1212 01:07:02.128548  203848 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-439215 && echo "kubernetes-upgrade-439215" | sudo tee /etc/hostname
	I1212 01:07:02.292716  203848 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-439215
	
	I1212 01:07:02.292833  203848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-439215
	I1212 01:07:02.311612  203848 main.go:143] libmachine: Using SSH client type: native
	I1212 01:07:02.311921  203848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1212 01:07:02.311944  203848 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-439215' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-439215/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-439215' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:07:02.459350  203848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
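The connection reset at 01:06:58 simply means sshd inside the just-started container was not listening yet; libmachine retries until the hostname command succeeds, then rewrites /etc/hostname and /etc/hosts as shown. The forwarded SSH port (33013 in this run) can be recovered and used directly, a sketch:

    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' kubernetes-upgrade-439215)
    ssh -i ~/.minikube/machines/kubernetes-upgrade-439215/id_rsa -p "$PORT" docker@127.0.0.1 hostname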
	I1212 01:07:02.459377  203848 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 01:07:02.459407  203848 ubuntu.go:190] setting up certificates
	I1212 01:07:02.459424  203848 provision.go:84] configureAuth start
	I1212 01:07:02.459495  203848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-439215
	I1212 01:07:02.477503  203848 provision.go:143] copyHostCerts
	I1212 01:07:02.477590  203848 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 01:07:02.477603  203848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 01:07:02.477680  203848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 01:07:02.477790  203848 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 01:07:02.477800  203848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 01:07:02.477828  203848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 01:07:02.477901  203848 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 01:07:02.477911  203848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 01:07:02.477937  203848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 01:07:02.478035  203848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-439215 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-439215 localhost minikube]
	I1212 01:07:02.958583  203848 provision.go:177] copyRemoteCerts
	I1212 01:07:02.958653  203848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:07:02.958693  203848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-439215
	I1212 01:07:02.976026  203848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/kubernetes-upgrade-439215/id_rsa Username:docker}
	I1212 01:07:03.083285  203848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 01:07:03.102420  203848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1212 01:07:03.120694  203848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:07:03.139721  203848 provision.go:87] duration metric: took 680.279222ms to configureAuth
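configureAuth refreshes the host-side CA/client copies, signs a server cert with the machine's SANs, and copies the results to /etc/docker inside the node; the remote side can be inspected afterwards (a sketch using the profile from this run):

    minikube -p kubernetes-upgrade-439215 ssh -- sudo ls -l /etc/docker
    # expected: ca.pem, server.pem and server-key.pem, matching the scp lines above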
	I1212 01:07:03.139748  203848 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:07:03.139945  203848 config.go:182] Loaded profile config "kubernetes-upgrade-439215": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:07:03.139961  203848 machine.go:97] duration metric: took 4.230356219s to provisionDockerMachine
	I1212 01:07:03.139970  203848 start.go:293] postStartSetup for "kubernetes-upgrade-439215" (driver="docker")
	I1212 01:07:03.139981  203848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:07:03.140043  203848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:07:03.140096  203848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-439215
	I1212 01:07:03.158248  203848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/kubernetes-upgrade-439215/id_rsa Username:docker}
	I1212 01:07:03.267012  203848 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:07:03.270248  203848 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:07:03.270274  203848 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:07:03.270285  203848 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 01:07:03.270337  203848 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 01:07:03.270424  203848 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 01:07:03.270546  203848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:07:03.278026  203848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:07:03.298561  203848 start.go:296] duration metric: took 158.575819ms for postStartSetup
	I1212 01:07:03.298643  203848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:07:03.298714  203848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-439215
	I1212 01:07:03.316048  203848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/kubernetes-upgrade-439215/id_rsa Username:docker}
	I1212 01:07:03.416034  203848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:07:03.421022  203848 fix.go:56] duration metric: took 4.961572746s for fixHost
	I1212 01:07:03.421048  203848 start.go:83] releasing machines lock for "kubernetes-upgrade-439215", held for 4.961631511s
	I1212 01:07:03.421122  203848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-439215
	I1212 01:07:03.438705  203848 ssh_runner.go:195] Run: cat /version.json
	I1212 01:07:03.438731  203848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:07:03.438762  203848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-439215
	I1212 01:07:03.438810  203848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-439215
	I1212 01:07:03.462394  203848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/kubernetes-upgrade-439215/id_rsa Username:docker}
	I1212 01:07:03.466887  203848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/kubernetes-upgrade-439215/id_rsa Username:docker}
	I1212 01:07:03.566800  203848 ssh_runner.go:195] Run: systemctl --version
	I1212 01:07:03.667921  203848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:07:03.672399  203848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:07:03.672553  203848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:07:03.680572  203848 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 01:07:03.680602  203848 start.go:496] detecting cgroup driver to use...
	I1212 01:07:03.680634  203848 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:07:03.680684  203848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 01:07:03.698579  203848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 01:07:03.712426  203848 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:07:03.712511  203848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:07:03.728199  203848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:07:03.741636  203848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:07:03.870329  203848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:07:03.988207  203848 docker.go:234] disabling docker service ...
	I1212 01:07:03.988284  203848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:07:04.004318  203848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:07:04.021347  203848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:07:04.136137  203848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:07:04.253986  203848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
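With --container-runtime=containerd, minikube stops and masks both cri-docker and docker inside the node so that only containerd serves the CRI socket; condensed, the sequence above amounts to (a sketch):

    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service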
	I1212 01:07:04.267088  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:07:04.282294  203848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 01:07:04.293208  203848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 01:07:04.302839  203848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 01:07:04.302943  203848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 01:07:04.312741  203848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:07:04.321810  203848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 01:07:04.330799  203848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:07:04.340132  203848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:07:04.348465  203848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 01:07:04.359356  203848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 01:07:04.369389  203848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 01:07:04.378779  203848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:07:04.386483  203848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:07:04.394212  203848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:07:04.500461  203848 ssh_runner.go:195] Run: sudo systemctl restart containerd
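The sed edits above pin the sandbox (pause) image, force SystemdCgroup = false to match the cgroupfs driver detected on the host, and re-enable unprivileged ports before containerd is restarted; the effective values can be confirmed from the merged configuration (a sketch):

    sudo containerd config dump | grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports'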
	I1212 01:07:04.652505  203848 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 01:07:04.652605  203848 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 01:07:04.656620  203848 start.go:564] Will wait 60s for crictl version
	I1212 01:07:04.656708  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:07:04.660371  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:07:04.684815  203848 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 01:07:04.684942  203848 ssh_runner.go:195] Run: containerd --version
	I1212 01:07:04.706779  203848 ssh_runner.go:195] Run: containerd --version
	I1212 01:07:04.730782  203848 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 01:07:04.733627  203848 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-439215 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:07:04.750282  203848 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 01:07:04.754305  203848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:07:04.763852  203848 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-439215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-439215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:07:04.763970  203848 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:07:04.764032  203848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:07:04.788226  203848 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1212 01:07:04.788301  203848 ssh_runner.go:195] Run: which lz4
	I1212 01:07:04.791951  203848 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:07:04.795454  203848 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:07:04.795499  203848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (305624510 bytes)
	I1212 01:07:07.875677  203848 containerd.go:563] duration metric: took 3.08376503s to copy over tarball
	I1212 01:07:07.875780  203848 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:07:09.717414  203848 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.841587554s)
	I1212 01:07:09.717481  203848 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
	stdout:
	
	stderr: identical "Cannot open: File exists" errors for the same zoneinfo entries as in the ** stderr ** block quoted above
	tar: Exiting with failure status due to previous errors
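The "Cannot open: File exists" errors are collisions with content already present under /var/lib/containerd: on a restarted machine the preload is extracted over a non-empty /var, tar exits with status 2, and minikube falls back to loading individual cached images (the kubeadm.go:910 line above). One way to see the pre-existing snapshot state, a sketch:

    minikube -p kubernetes-upgrade-439215 ssh -- sudo ls /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots
    # non-empty output here is what makes the preload extraction collide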
	I1212 01:07:09.717566  203848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:07:09.743173  203848 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1212 01:07:09.743196  203848 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 01:07:09.743257  203848 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:07:09.743474  203848 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 01:07:09.743575  203848 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 01:07:09.743662  203848 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 01:07:09.743761  203848 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 01:07:09.743854  203848 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1212 01:07:09.743963  203848 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1212 01:07:09.744051  203848 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 01:07:09.745181  203848 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 01:07:09.745584  203848 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1212 01:07:09.745882  203848 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 01:07:09.746101  203848 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:07:09.746326  203848 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1212 01:07:09.746466  203848 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 01:07:09.747110  203848 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 01:07:09.747438  203848 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 01:07:10.067469  203848 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1212 01:07:10.067557  203848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1212 01:07:10.090976  203848 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1212 01:07:10.091077  203848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 01:07:10.110401  203848 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1212 01:07:10.110503  203848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 01:07:10.115360  203848 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1212 01:07:10.115437  203848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 01:07:10.115839  203848 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1212 01:07:10.115877  203848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1212 01:07:10.116261  203848 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1212 01:07:10.116296  203848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1212 01:07:10.132926  203848 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1212 01:07:10.133008  203848 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1212 01:07:10.133084  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:07:10.133213  203848 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1212 01:07:10.133251  203848 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 01:07:10.133298  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:07:10.148026  203848 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1212 01:07:10.148175  203848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 01:07:10.157023  203848 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1212 01:07:10.157075  203848 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 01:07:10.157143  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:07:10.181679  203848 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1212 01:07:10.181722  203848 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 01:07:10.181775  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:07:10.181839  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 01:07:10.181871  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1212 01:07:10.181949  203848 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1212 01:07:10.181984  203848 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 01:07:10.182037  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:07:10.182177  203848 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1212 01:07:10.182217  203848 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1212 01:07:10.182273  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:07:10.209961  203848 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1212 01:07:10.210003  203848 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 01:07:10.210055  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:07:10.210146  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 01:07:10.237561  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1212 01:07:10.237642  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1212 01:07:10.237691  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1212 01:07:10.237734  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 01:07:10.237652  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 01:07:10.255624  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 01:07:10.255575  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 01:07:10.348111  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 01:07:10.348232  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1212 01:07:10.348364  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1212 01:07:10.348423  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1212 01:07:10.348487  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 01:07:10.348528  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 01:07:10.348597  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 01:07:10.425668  203848 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1212 01:07:10.453272  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 01:07:10.453373  203848 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1212 01:07:10.453479  203848 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1212 01:07:10.453572  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1212 01:07:10.453657  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1212 01:07:10.453738  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 01:07:10.453806  203848 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1212 01:07:10.516323  203848 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1212 01:07:10.516489  203848 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1212 01:07:10.516533  203848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1212 01:07:10.516544  203848 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1212 01:07:10.516500  203848 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1212 01:07:10.516649  203848 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1212 01:07:10.516760  203848 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1212 01:07:10.521403  203848 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1212 01:07:10.521438  203848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1212 01:07:10.552206  203848 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1212 01:07:10.552298  203848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1212 01:07:10.715472  203848 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1212 01:07:10.715559  203848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	W1212 01:07:11.030718  203848 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1212 01:07:11.030970  203848 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1212 01:07:11.031074  203848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:07:11.373440  203848 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1212 01:07:11.373479  203848 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:07:11.373544  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:07:11.377860  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:07:11.505920  203848 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 01:07:11.506029  203848 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:07:11.510486  203848 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1212 01:07:11.510533  203848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1212 01:07:11.596147  203848 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:07:11.596223  203848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:07:11.983887  203848 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
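Note the W image.go:328 line above: the cached storage-provisioner manifest was amd64, so minikube re-resolved the arm64 variant before importing it with ctr. The imported image can be listed from containerd's k8s.io namespace the same way minikube checks it (a sketch):

    sudo ctr -n k8s.io images ls 'name==gcr.io/k8s-minikube/storage-provisioner:v5'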
	I1212 01:07:11.983993  203848 cache_images.go:94] duration metric: took 2.240781907s to LoadCachedImages
	W1212 01:07:11.984080  203848 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0: no such file or directory
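LoadCachedImages fails as a whole because the per-image tarball for kube-controller-manager_v1.35.0-beta.0 was never downloaded into the cache; which tarballs actually exist can be checked directly (a sketch, path from this run):

    ls /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/
    # kube-controller-manager_v1.35.0-beta.0 is expected to be absent here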
	I1212 01:07:11.984121  203848 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1212 01:07:11.984369  203848 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-439215 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-439215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
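The unit drop-in above (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below) pins the versioned kubelet binary, the node IP and the hostname override; once written it can be viewed on the node with (a sketch):

    minikube -p kubernetes-upgrade-439215 ssh -- systemctl cat kubelet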
	I1212 01:07:11.984457  203848 ssh_runner.go:195] Run: sudo crictl info
	I1212 01:07:12.011486  203848 cni.go:84] Creating CNI manager for ""
	I1212 01:07:12.011570  203848 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:07:12.011611  203848 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 01:07:12.011665  203848 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-439215 NodeName:kubernetes-upgrade-439215 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:07:12.011829  203848 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-439215"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
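The rendered kubeadm.yaml bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into one multi-document file; a config like this can be sanity-checked before use (a sketch; kubeadm config validate is available in recent kubeadm releases):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new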
	
	I1212 01:07:12.011959  203848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 01:07:12.021216  203848 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:07:12.021343  203848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:07:12.029774  203848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (336 bytes)
	I1212 01:07:12.044070  203848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 01:07:12.056869  203848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1212 01:07:12.070008  203848 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:07:12.074592  203848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:07:12.085820  203848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:07:12.216257  203848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:07:12.234274  203848 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kubernetes-upgrade-439215 for IP: 192.168.76.2
	I1212 01:07:12.234336  203848 certs.go:195] generating shared ca certs ...
	I1212 01:07:12.234368  203848 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:07:12.234545  203848 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 01:07:12.234626  203848 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 01:07:12.234659  203848 certs.go:257] generating profile certs ...
	I1212 01:07:12.234772  203848 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kubernetes-upgrade-439215/client.key
	I1212 01:07:12.234874  203848 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kubernetes-upgrade-439215/apiserver.key.ca4b53bb
	I1212 01:07:12.234964  203848 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kubernetes-upgrade-439215/proxy-client.key
	I1212 01:07:12.235153  203848 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 01:07:12.235216  203848 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 01:07:12.235242  203848 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:07:12.235302  203848 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:07:12.235349  203848 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:07:12.235404  203848 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 01:07:12.235487  203848 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:07:12.236158  203848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:07:12.263137  203848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:07:12.282489  203848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:07:12.302433  203848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:07:12.320201  203848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kubernetes-upgrade-439215/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1212 01:07:12.337322  203848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kubernetes-upgrade-439215/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:07:12.355929  203848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kubernetes-upgrade-439215/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:07:12.373710  203848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kubernetes-upgrade-439215/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:07:12.390869  203848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 01:07:12.409713  203848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:07:12.427980  203848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 01:07:12.445421  203848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:07:12.457613  203848 ssh_runner.go:195] Run: openssl version
	I1212 01:07:12.463908  203848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 01:07:12.471239  203848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 01:07:12.478765  203848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 01:07:12.482527  203848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 01:07:12.482597  203848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 01:07:12.523891  203848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:07:12.531322  203848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 01:07:12.538499  203848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 01:07:12.546188  203848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 01:07:12.549845  203848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 01:07:12.549912  203848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 01:07:12.590714  203848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 01:07:12.598072  203848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:07:12.605352  203848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:07:12.612881  203848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:07:12.616516  203848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:07:12.616638  203848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:07:12.657189  203848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:07:12.664773  203848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:07:12.668542  203848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:07:12.709138  203848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:07:12.750173  203848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:07:12.795218  203848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:07:12.841132  203848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:07:12.882254  203848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
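Each -checkend 86400 call exits 0 only if the certificate is still valid 24 hours from now; that exit status is how minikube decides whether control-plane certs need regeneration. Standalone (a sketch):

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "needs regeneration"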
	I1212 01:07:12.932549  203848 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-439215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-439215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:07:12.932656  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 01:07:12.932794  203848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:07:12.960150  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:07:12.960176  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:07:12.960182  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:07:12.960186  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:07:12.960189  203848 cri.go:89] found id: ""
	I1212 01:07:12.960246  203848 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1212 01:07:12.983079  203848 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T01:07:12Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1212 01:07:12.983148  203848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:07:12.991552  203848 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 01:07:12.991569  203848 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 01:07:12.991619  203848 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:07:12.999529  203848 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:07:13.000153  203848 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-439215" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:07:13.000392  203848 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-2343/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-439215" cluster setting kubeconfig missing "kubernetes-upgrade-439215" context setting]
	I1212 01:07:13.000832  203848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:07:13.001469  203848 kapi.go:59] client config for kubernetes-upgrade-439215: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kubernetes-upgrade-439215/client.crt", KeyFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kubernetes-upgrade-439215/client.key", CAFile:"/home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 01:07:13.001989  203848 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 01:07:13.002007  203848 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 01:07:13.002013  203848 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 01:07:13.002017  203848 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 01:07:13.002021  203848 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 01:07:13.002289  203848 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:07:13.015917  203848 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 01:06:30.909826955 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 01:07:12.066034721 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-439215"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
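(The drift shown above is the kubeadm config schema change from v1beta3 to v1beta4: `extraArgs` moved from a string map to a list of name/value pairs, the deprecated etcd `proxy-refresh-interval` extra arg was dropped, and `kubernetesVersion` was bumped for the upgrade. minikube renders the new file directly, but for a hand-run upgrade `kubeadm config migrate` performs the same schema conversion; output path here is illustrative:

  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config migrate \
    --old-config /var/tmp/minikube/kubeadm.yaml \
    --new-config /tmp/kubeadm-migrated.yaml
)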
	I1212 01:07:13.015936  203848 kubeadm.go:1161] stopping kube-system containers ...
	I1212 01:07:13.015949  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1212 01:07:13.016006  203848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:07:13.048264  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:07:13.048286  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:07:13.048292  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:07:13.048296  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:07:13.048307  203848 cri.go:89] found id: ""
	I1212 01:07:13.048346  203848 cri.go:252] Stopping containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:07:13.048431  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:07:13.052974  203848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3
	I1212 01:07:13.096052  203848 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:07:13.112086  203848 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:07:13.119940  203848 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec 12 01:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 12 01:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 12 01:06 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 12 01:06 /etc/kubernetes/scheduler.conf
	
	I1212 01:07:13.120049  203848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:07:13.128622  203848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:07:13.136566  203848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:07:13.144516  203848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:07:13.144580  203848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:07:13.152701  203848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:07:13.160427  203848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:07:13.160491  203848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:07:13.167922  203848 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:07:13.175636  203848 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:07:13.219759  203848 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:07:15.127351  203848 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.907557264s)
	I1212 01:07:15.127423  203848 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:07:15.343542  203848 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:07:15.410810  203848 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:07:15.459900  203848 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:07:15.460003  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:15.961086  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:16.460185  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:16.960254  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:17.460126  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:17.961039  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:18.460566  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:18.960175  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:19.460098  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:19.960599  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:20.460617  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:20.961058  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:21.460137  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:21.960812  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:22.460287  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:22.961145  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:23.460157  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:23.960883  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:24.460106  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:24.960323  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:25.460757  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:25.960122  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:26.460171  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:26.960962  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:27.460750  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:27.960750  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:28.460468  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:28.960064  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:29.460104  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:29.960283  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:30.460368  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:30.960963  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:31.460441  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:31.960485  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:32.460133  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:32.960635  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:33.460966  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:33.960915  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:34.460114  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:34.960120  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:35.461086  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:35.960106  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:36.460157  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:36.960533  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:37.461025  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:37.960949  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:38.460174  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:38.960386  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:39.460411  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:39.960867  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:40.460690  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:40.960696  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:41.460896  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:41.960366  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:42.461014  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:42.961129  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:43.460793  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:43.960958  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:44.460669  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:44.960770  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:45.460202  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:45.960995  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:46.460920  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:46.960083  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:47.460239  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:47.960830  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:48.460638  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:48.960574  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:49.460931  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:49.960511  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:50.461067  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:50.960119  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:51.460948  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:51.960610  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:52.460091  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:52.960971  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:53.460661  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:53.960432  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:54.460132  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:54.960734  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:55.460129  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:55.960398  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:56.460906  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:56.960143  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:57.461050  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:57.960935  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:58.460029  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:58.960991  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:59.460120  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:59.960802  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:00.460194  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:00.960138  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:01.460135  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:01.960079  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:02.461029  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:02.960549  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:03.460464  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:03.961632  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:04.460725  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:04.960963  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:05.460923  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:05.960189  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:06.460680  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:06.960821  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:07.460519  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:07.960100  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:08.460130  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:08.961050  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:09.460136  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:09.960094  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:10.460125  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:10.960116  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:11.460138  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:11.960835  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:12.460129  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:12.960105  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:13.460889  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:13.960989  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:14.460726  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:14.960853  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
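(The run of `pgrep` lines above is minikube's apiserver wait loop: the same process check retried at roughly 500 ms intervals, here for about a minute without a hit before log gathering begins. An equivalent standalone poll, with the interval and timeout taken from the log as assumptions:

  # poll for the apiserver process every 0.5s, giving up after 60s (120 tries)
  for _ in $(seq 1 120); do
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && exit 0
    sleep 0.5
  done
  exit 1   # apiserver never appeared, as in the log above
)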
	I1212 01:08:15.460080  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:08:15.460188  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:08:15.502339  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:15.502363  203848 cri.go:89] found id: ""
	I1212 01:08:15.502370  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:08:15.502427  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:15.507089  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:08:15.507159  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:08:15.545286  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:15.545306  203848 cri.go:89] found id: ""
	I1212 01:08:15.545324  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:08:15.545379  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:15.549746  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:08:15.549825  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:08:15.593540  203848 cri.go:89] found id: ""
	I1212 01:08:15.593562  203848 logs.go:282] 0 containers: []
	W1212 01:08:15.593580  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:08:15.593587  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:08:15.593657  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:08:15.655393  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:15.655413  203848 cri.go:89] found id: ""
	I1212 01:08:15.655422  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:08:15.655477  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:15.660097  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:08:15.660171  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:08:15.685850  203848 cri.go:89] found id: ""
	I1212 01:08:15.685871  203848 logs.go:282] 0 containers: []
	W1212 01:08:15.685880  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:08:15.685886  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:08:15.685943  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:08:15.713347  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:15.713365  203848 cri.go:89] found id: ""
	I1212 01:08:15.713374  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:08:15.713429  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:15.717641  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:08:15.717706  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:08:15.749787  203848 cri.go:89] found id: ""
	I1212 01:08:15.749809  203848 logs.go:282] 0 containers: []
	W1212 01:08:15.749817  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:08:15.749824  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:08:15.749880  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:08:15.797510  203848 cri.go:89] found id: ""
	I1212 01:08:15.797580  203848 logs.go:282] 0 containers: []
	W1212 01:08:15.797603  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:08:15.797632  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:08:15.797669  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:15.863888  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:08:15.863931  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:15.899402  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:08:15.899439  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:15.936396  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:08:15.936429  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:08:15.982914  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:08:15.982942  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:08:15.997249  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:08:15.997277  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:16.041926  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:08:16.041956  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:08:16.089393  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:08:16.089427  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:08:16.174887  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:08:16.174935  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:08:16.274193  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
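(`connection refused` on localhost:8443 is consistent with the empty pgrep results above: crictl still knows the kube-apiserver container ID, but no process is serving the port. A hypothetical interactive diagnostic, not taken from the log, would be to check the listener directly:

  sudo ss -ltn | grep ':8443' || echo "nothing listening on 8443 -- matches the refused connection"
)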
	I1212 01:08:18.775108  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:18.788129  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:08:18.788217  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:08:18.819963  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:18.819989  203848 cri.go:89] found id: ""
	I1212 01:08:18.819998  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:08:18.820061  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:18.824753  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:08:18.824826  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:08:18.857250  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:18.857275  203848 cri.go:89] found id: ""
	I1212 01:08:18.857283  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:08:18.857339  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:18.861559  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:08:18.861633  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:08:18.891391  203848 cri.go:89] found id: ""
	I1212 01:08:18.891421  203848 logs.go:282] 0 containers: []
	W1212 01:08:18.891431  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:08:18.891439  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:08:18.891519  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:08:18.930934  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:18.930963  203848 cri.go:89] found id: ""
	I1212 01:08:18.930987  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:08:18.931064  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:18.935282  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:08:18.935368  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:08:18.969479  203848 cri.go:89] found id: ""
	I1212 01:08:18.969511  203848 logs.go:282] 0 containers: []
	W1212 01:08:18.969521  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:08:18.969528  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:08:18.969601  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:08:19.001453  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:19.001481  203848 cri.go:89] found id: ""
	I1212 01:08:19.001489  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:08:19.001555  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:19.006660  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:08:19.006772  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:08:19.048490  203848 cri.go:89] found id: ""
	I1212 01:08:19.048517  203848 logs.go:282] 0 containers: []
	W1212 01:08:19.048527  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:08:19.048552  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:08:19.048623  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:08:19.085439  203848 cri.go:89] found id: ""
	I1212 01:08:19.085469  203848 logs.go:282] 0 containers: []
	W1212 01:08:19.085480  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:08:19.085496  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:08:19.085512  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:19.122517  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:08:19.122551  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:19.159680  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:08:19.159752  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:08:19.191109  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:08:19.191181  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:08:19.271243  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:08:19.271312  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:08:19.271353  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:19.352412  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:08:19.352491  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:19.417348  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:08:19.417395  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:08:19.482354  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:08:19.482384  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:08:19.558124  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:08:19.558169  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:08:22.080363  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:22.093222  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:08:22.093294  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:08:22.129292  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:22.129311  203848 cri.go:89] found id: ""
	I1212 01:08:22.129320  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:08:22.129380  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:22.134032  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:08:22.134112  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:08:22.181324  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:22.181344  203848 cri.go:89] found id: ""
	I1212 01:08:22.181352  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:08:22.181416  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:22.185858  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:08:22.185981  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:08:22.217838  203848 cri.go:89] found id: ""
	I1212 01:08:22.217926  203848 logs.go:282] 0 containers: []
	W1212 01:08:22.217950  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:08:22.217977  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:08:22.218099  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:08:22.263095  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:22.263158  203848 cri.go:89] found id: ""
	I1212 01:08:22.263187  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:08:22.263281  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:22.268367  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:08:22.268492  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:08:22.298890  203848 cri.go:89] found id: ""
	I1212 01:08:22.298963  203848 logs.go:282] 0 containers: []
	W1212 01:08:22.298987  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:08:22.299047  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:08:22.299139  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:08:22.350461  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:22.350531  203848 cri.go:89] found id: ""
	I1212 01:08:22.350553  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:08:22.350645  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:22.355491  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:08:22.355565  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:08:22.384808  203848 cri.go:89] found id: ""
	I1212 01:08:22.384830  203848 logs.go:282] 0 containers: []
	W1212 01:08:22.384838  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:08:22.384845  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:08:22.384912  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:08:22.421340  203848 cri.go:89] found id: ""
	I1212 01:08:22.421361  203848 logs.go:282] 0 containers: []
	W1212 01:08:22.421370  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:08:22.421385  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:08:22.421397  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:22.484262  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:08:22.484345  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:22.555409  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:08:22.555479  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:08:22.585081  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:08:22.585119  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:08:22.623259  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:08:22.623337  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:08:22.686389  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:08:22.686425  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:08:22.699810  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:08:22.699836  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:08:22.767463  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:08:22.767525  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:08:22.767550  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:22.802536  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:08:22.802568  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:25.331127  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:25.343349  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:08:25.343425  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:08:25.375256  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:25.375281  203848 cri.go:89] found id: ""
	I1212 01:08:25.375289  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:08:25.375348  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:25.383121  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:08:25.383200  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:08:25.447629  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:25.447654  203848 cri.go:89] found id: ""
	I1212 01:08:25.447663  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:08:25.447719  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:25.451799  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:08:25.451882  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:08:25.500301  203848 cri.go:89] found id: ""
	I1212 01:08:25.500328  203848 logs.go:282] 0 containers: []
	W1212 01:08:25.500336  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:08:25.500343  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:08:25.500409  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:08:25.548369  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:25.548393  203848 cri.go:89] found id: ""
	I1212 01:08:25.548402  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:08:25.548458  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:25.564513  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:08:25.564597  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:08:25.621365  203848 cri.go:89] found id: ""
	I1212 01:08:25.621389  203848 logs.go:282] 0 containers: []
	W1212 01:08:25.621398  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:08:25.621404  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:08:25.621472  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:08:25.672963  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:25.672987  203848 cri.go:89] found id: ""
	I1212 01:08:25.672996  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:08:25.673057  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:25.681596  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:08:25.681678  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:08:25.737228  203848 cri.go:89] found id: ""
	I1212 01:08:25.737255  203848 logs.go:282] 0 containers: []
	W1212 01:08:25.737277  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:08:25.737287  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:08:25.737350  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:08:25.777677  203848 cri.go:89] found id: ""
	I1212 01:08:25.777708  203848 logs.go:282] 0 containers: []
	W1212 01:08:25.777718  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:08:25.777731  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:08:25.777744  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:25.845238  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:08:25.845272  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:08:25.890610  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:08:25.890644  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:08:26.028291  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:08:26.028309  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:08:26.028322  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:26.080406  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:08:26.080441  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:26.147807  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:08:26.147842  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:08:26.214803  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:08:26.214868  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:08:26.305378  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:08:26.305454  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:08:26.325948  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:08:26.325975  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:28.868259  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:28.879299  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:08:28.879375  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:08:28.909718  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:28.909744  203848 cri.go:89] found id: ""
	I1212 01:08:28.909753  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:08:28.909811  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:28.914043  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:08:28.914134  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:08:28.939682  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:28.939709  203848 cri.go:89] found id: ""
	I1212 01:08:28.939716  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:08:28.939791  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:28.943692  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:08:28.943773  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:08:28.985226  203848 cri.go:89] found id: ""
	I1212 01:08:28.985291  203848 logs.go:282] 0 containers: []
	W1212 01:08:28.985313  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:08:28.985331  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:08:28.985424  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:08:29.036349  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:29.036413  203848 cri.go:89] found id: ""
	I1212 01:08:29.036434  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:08:29.036529  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:29.041265  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:08:29.041382  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:08:29.082517  203848 cri.go:89] found id: ""
	I1212 01:08:29.082582  203848 logs.go:282] 0 containers: []
	W1212 01:08:29.082607  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:08:29.082626  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:08:29.082715  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:08:29.109924  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:29.109986  203848 cri.go:89] found id: ""
	I1212 01:08:29.110008  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:08:29.110114  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:29.114104  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:08:29.114218  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:08:29.147798  203848 cri.go:89] found id: ""
	I1212 01:08:29.147872  203848 logs.go:282] 0 containers: []
	W1212 01:08:29.147895  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:08:29.147916  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:08:29.148014  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:08:29.176885  203848 cri.go:89] found id: ""
	I1212 01:08:29.176951  203848 logs.go:282] 0 containers: []
	W1212 01:08:29.176972  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:08:29.176997  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:08:29.177067  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:29.221293  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:08:29.221322  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:08:29.283554  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:08:29.283589  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:08:29.382521  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:08:29.382539  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:08:29.382552  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:29.424434  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:08:29.424469  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:29.474691  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:08:29.474725  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:08:29.508021  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:08:29.508057  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:08:29.591182  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:08:29.591262  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:08:29.607323  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:08:29.607398  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
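	From here the same three-step cycle repeats roughly every three seconds (compare the timestamps at 01:08:28.8, 01:08:32.1, 01:08:35.4, ...): probe for a kube-apiserver process, enumerate each control-plane component's containers through crictl, then gather component logs and retry `kubectl describe nodes`. A minimal bash sketch of that cycle, reusing the exact commands quoted in the log — the loop structure, retry count, and sleep interval are assumptions for illustration, not minikube's actual implementation:

#!/bin/bash
# Illustrative re-creation of the health-wait loop visible in the log above.
# Commands are the ones minikube is shown running; loop/timing are assumed.
KUBECTL=/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
for attempt in $(seq 1 100); do
    # Step 1: is a kube-apiserver process for this profile running at all?
    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "apiserver process present on attempt ${attempt}"
    fi
    # Step 2: which control-plane containers does the CRI know about?
    # (crictl ps -a also lists exited containers, matching the "found id"
    # lines above even while the cluster is unhealthy.)
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
        sudo crictl ps -a --quiet --name="${name}"
    done
    # Step 3: can the apiserver actually answer a real request yet?
    if sudo "${KUBECTL}" describe nodes \
            --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; then
        echo "apiserver healthy"
        break
    fi
    sleep 3   # the log shows roughly a three-second poll interval
done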
	I1212 01:08:32.147141  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:32.158195  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:08:32.158267  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:08:32.186450  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:32.186474  203848 cri.go:89] found id: ""
	I1212 01:08:32.186483  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:08:32.186548  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:32.190766  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:08:32.190841  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:08:32.222751  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:32.222775  203848 cri.go:89] found id: ""
	I1212 01:08:32.222783  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:08:32.222840  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:32.226870  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:08:32.226950  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:08:32.255904  203848 cri.go:89] found id: ""
	I1212 01:08:32.255929  203848 logs.go:282] 0 containers: []
	W1212 01:08:32.255938  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:08:32.255944  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:08:32.256005  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:08:32.309981  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:32.310013  203848 cri.go:89] found id: ""
	I1212 01:08:32.310022  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:08:32.310095  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:32.314253  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:08:32.314335  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:08:32.354247  203848 cri.go:89] found id: ""
	I1212 01:08:32.354272  203848 logs.go:282] 0 containers: []
	W1212 01:08:32.354287  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:08:32.354294  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:08:32.354353  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:08:32.381576  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:32.381599  203848 cri.go:89] found id: ""
	I1212 01:08:32.381607  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:08:32.381661  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:32.385945  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:08:32.386022  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:08:32.416342  203848 cri.go:89] found id: ""
	I1212 01:08:32.416363  203848 logs.go:282] 0 containers: []
	W1212 01:08:32.416372  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:08:32.416378  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:08:32.416447  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:08:32.464105  203848 cri.go:89] found id: ""
	I1212 01:08:32.464132  203848 logs.go:282] 0 containers: []
	W1212 01:08:32.464141  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:08:32.464160  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:08:32.464172  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:08:32.477079  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:08:32.477108  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:08:32.601865  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:08:32.601890  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:08:32.601903  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:32.666327  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:08:32.666362  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:32.715182  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:08:32.715221  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:08:32.752829  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:08:32.752863  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:08:32.815500  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:08:32.815534  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:32.870113  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:08:32.870281  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:32.904907  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:08:32.904937  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:08:35.440427  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:35.450448  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:08:35.450519  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:08:35.476181  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:35.476205  203848 cri.go:89] found id: ""
	I1212 01:08:35.476213  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:08:35.476274  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:35.480067  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:08:35.480144  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:08:35.503929  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:35.503952  203848 cri.go:89] found id: ""
	I1212 01:08:35.503961  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:08:35.504019  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:35.507732  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:08:35.507813  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:08:35.534279  203848 cri.go:89] found id: ""
	I1212 01:08:35.534301  203848 logs.go:282] 0 containers: []
	W1212 01:08:35.534310  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:08:35.534316  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:08:35.534374  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:08:35.566746  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:35.566769  203848 cri.go:89] found id: ""
	I1212 01:08:35.566778  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:08:35.566833  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:35.570714  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:08:35.570780  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:08:35.608781  203848 cri.go:89] found id: ""
	I1212 01:08:35.608804  203848 logs.go:282] 0 containers: []
	W1212 01:08:35.608812  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:08:35.608818  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:08:35.608886  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:08:35.637273  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:35.637292  203848 cri.go:89] found id: ""
	I1212 01:08:35.637300  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:08:35.637359  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:35.640957  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:08:35.641032  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:08:35.669661  203848 cri.go:89] found id: ""
	I1212 01:08:35.669696  203848 logs.go:282] 0 containers: []
	W1212 01:08:35.669705  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:08:35.669712  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:08:35.669788  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:08:35.695302  203848 cri.go:89] found id: ""
	I1212 01:08:35.695325  203848 logs.go:282] 0 containers: []
	W1212 01:08:35.695334  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:08:35.695348  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:08:35.695359  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:08:35.722311  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:08:35.722344  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:08:35.781466  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:08:35.781503  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:08:35.793941  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:08:35.793974  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:08:35.857664  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:08:35.857688  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:08:35.857702  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:35.899456  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:08:35.899488  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:35.934688  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:08:35.934719  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:35.961227  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:08:35.961256  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:35.993351  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:08:35.993384  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:08:38.524805  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:38.543882  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:08:38.543997  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:08:38.591227  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:38.591260  203848 cri.go:89] found id: ""
	I1212 01:08:38.591269  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:08:38.591326  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:38.595619  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:08:38.595694  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:08:38.634765  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:38.634795  203848 cri.go:89] found id: ""
	I1212 01:08:38.634803  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:08:38.634876  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:38.639221  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:08:38.639322  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:08:38.678642  203848 cri.go:89] found id: ""
	I1212 01:08:38.678665  203848 logs.go:282] 0 containers: []
	W1212 01:08:38.678674  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:08:38.678680  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:08:38.678742  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:08:38.705533  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:38.705556  203848 cri.go:89] found id: ""
	I1212 01:08:38.705565  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:08:38.705634  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:38.709506  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:08:38.709576  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:08:38.739224  203848 cri.go:89] found id: ""
	I1212 01:08:38.739297  203848 logs.go:282] 0 containers: []
	W1212 01:08:38.739322  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:08:38.739340  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:08:38.739423  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:08:38.774194  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:38.774212  203848 cri.go:89] found id: ""
	I1212 01:08:38.774221  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:08:38.774282  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:38.778683  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:08:38.778746  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:08:38.817298  203848 cri.go:89] found id: ""
	I1212 01:08:38.817320  203848 logs.go:282] 0 containers: []
	W1212 01:08:38.817328  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:08:38.817334  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:08:38.817397  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:08:38.850335  203848 cri.go:89] found id: ""
	I1212 01:08:38.850356  203848 logs.go:282] 0 containers: []
	W1212 01:08:38.850364  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:08:38.850377  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:08:38.850394  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:08:38.913126  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:08:38.913155  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:38.963102  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:08:38.963132  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:39.002192  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:08:39.002225  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:39.054747  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:08:39.054777  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:08:39.085590  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:08:39.085622  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:08:39.149594  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:08:39.149667  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:08:39.167536  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:08:39.167559  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:08:39.253562  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:08:39.253582  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:08:39.253594  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:41.837324  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:41.849792  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:08:41.849888  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:08:41.881843  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:41.881863  203848 cri.go:89] found id: ""
	I1212 01:08:41.881872  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:08:41.881943  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:41.886596  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:08:41.886675  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:08:41.917571  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:41.917595  203848 cri.go:89] found id: ""
	I1212 01:08:41.917654  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:08:41.917742  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:41.922136  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:08:41.922208  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:08:41.951932  203848 cri.go:89] found id: ""
	I1212 01:08:41.951953  203848 logs.go:282] 0 containers: []
	W1212 01:08:41.951963  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:08:41.951969  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:08:41.952032  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:08:41.979116  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:41.979134  203848 cri.go:89] found id: ""
	I1212 01:08:41.979142  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:08:41.979199  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:41.983511  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:08:41.983580  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:08:42.025764  203848 cri.go:89] found id: ""
	I1212 01:08:42.025787  203848 logs.go:282] 0 containers: []
	W1212 01:08:42.025796  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:08:42.025803  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:08:42.025870  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:08:42.065428  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:42.065452  203848 cri.go:89] found id: ""
	I1212 01:08:42.065461  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:08:42.065550  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:42.071718  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:08:42.071832  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:08:42.130398  203848 cri.go:89] found id: ""
	I1212 01:08:42.130502  203848 logs.go:282] 0 containers: []
	W1212 01:08:42.130531  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:08:42.130553  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:08:42.130654  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:08:42.184271  203848 cri.go:89] found id: ""
	I1212 01:08:42.184366  203848 logs.go:282] 0 containers: []
	W1212 01:08:42.184426  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:08:42.184470  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:08:42.184514  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:42.249076  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:08:42.249164  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:42.340404  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:08:42.340488  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:08:42.384754  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:08:42.384830  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:08:42.411938  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:08:42.411969  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:42.458170  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:08:42.458249  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:42.499158  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:08:42.499242  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:08:42.539318  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:08:42.539395  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:08:42.603675  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:08:42.603754  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:08:42.684112  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:08:45.184400  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:45.202214  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:08:45.202350  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:08:45.252775  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:45.252806  203848 cri.go:89] found id: ""
	I1212 01:08:45.252816  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:08:45.252880  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:45.258573  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:08:45.258661  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:08:45.328613  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:45.328636  203848 cri.go:89] found id: ""
	I1212 01:08:45.328643  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:08:45.328700  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:45.338723  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:08:45.338800  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:08:45.421687  203848 cri.go:89] found id: ""
	I1212 01:08:45.421763  203848 logs.go:282] 0 containers: []
	W1212 01:08:45.421787  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:08:45.421807  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:08:45.421893  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:08:45.455544  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:45.455606  203848 cri.go:89] found id: ""
	I1212 01:08:45.455638  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:08:45.455721  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:45.460612  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:08:45.460757  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:08:45.496586  203848 cri.go:89] found id: ""
	I1212 01:08:45.496661  203848 logs.go:282] 0 containers: []
	W1212 01:08:45.496687  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:08:45.496715  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:08:45.496814  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:08:45.545579  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:45.545652  203848 cri.go:89] found id: ""
	I1212 01:08:45.545677  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:08:45.545777  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:45.550657  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:08:45.550805  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:08:45.596169  203848 cri.go:89] found id: ""
	I1212 01:08:45.596244  203848 logs.go:282] 0 containers: []
	W1212 01:08:45.596268  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:08:45.596288  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:08:45.596383  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:08:45.649403  203848 cri.go:89] found id: ""
	I1212 01:08:45.649486  203848 logs.go:282] 0 containers: []
	W1212 01:08:45.649509  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:08:45.649536  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:08:45.649569  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:08:45.778082  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:08:45.778151  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:08:45.778178  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:45.838574  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:08:45.838606  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:45.903860  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:08:45.903890  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:08:45.951157  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:08:45.951225  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:08:46.038613  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:08:46.038647  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:08:46.118477  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:08:46.118515  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:08:46.134825  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:08:46.134852  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:46.203292  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:08:46.203326  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:48.761855  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:48.772107  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:08:48.772176  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:08:48.796708  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:48.796743  203848 cri.go:89] found id: ""
	I1212 01:08:48.796755  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:08:48.796813  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:48.801219  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:08:48.801324  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:08:48.829339  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:48.829363  203848 cri.go:89] found id: ""
	I1212 01:08:48.829372  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:08:48.829432  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:48.833237  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:08:48.833316  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:08:48.859200  203848 cri.go:89] found id: ""
	I1212 01:08:48.859226  203848 logs.go:282] 0 containers: []
	W1212 01:08:48.859235  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:08:48.859242  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:08:48.859307  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:08:48.884191  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:48.884215  203848 cri.go:89] found id: ""
	I1212 01:08:48.884234  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:08:48.884295  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:48.888246  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:08:48.888341  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:08:48.912722  203848 cri.go:89] found id: ""
	I1212 01:08:48.912748  203848 logs.go:282] 0 containers: []
	W1212 01:08:48.912758  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:08:48.912764  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:08:48.912829  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:08:48.938072  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:48.938097  203848 cri.go:89] found id: ""
	I1212 01:08:48.938106  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:08:48.938165  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:48.942270  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:08:48.942345  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:08:48.971823  203848 cri.go:89] found id: ""
	I1212 01:08:48.971848  203848 logs.go:282] 0 containers: []
	W1212 01:08:48.971857  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:08:48.971864  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:08:48.971929  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:08:48.997165  203848 cri.go:89] found id: ""
	I1212 01:08:48.997191  203848 logs.go:282] 0 containers: []
	W1212 01:08:48.997200  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:08:48.997215  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:08:48.997253  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:08:49.057085  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:08:49.057120  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:08:49.070225  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:08:49.070257  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:08:49.131117  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:08:49.131183  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:08:49.131210  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:49.165529  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:08:49.165558  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:49.203215  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:08:49.203245  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:08:49.233433  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:08:49.233471  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:08:49.261930  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:08:49.261961  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:49.303058  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:08:49.303093  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:51.835571  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:51.845810  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:08:51.845883  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:08:51.873454  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:51.873479  203848 cri.go:89] found id: ""
	I1212 01:08:51.873487  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:08:51.873546  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:51.877327  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:08:51.877409  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:08:51.902807  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:51.902871  203848 cri.go:89] found id: ""
	I1212 01:08:51.902893  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:08:51.902983  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:51.906837  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:08:51.906916  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:08:51.933273  203848 cri.go:89] found id: ""
	I1212 01:08:51.933351  203848 logs.go:282] 0 containers: []
	W1212 01:08:51.933374  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:08:51.933395  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:08:51.933492  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:08:51.963750  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:51.963818  203848 cri.go:89] found id: ""
	I1212 01:08:51.963842  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:08:51.963931  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:51.967826  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:08:51.967928  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:08:51.992004  203848 cri.go:89] found id: ""
	I1212 01:08:51.992027  203848 logs.go:282] 0 containers: []
	W1212 01:08:51.992036  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:08:51.992042  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:08:51.992106  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:08:52.023740  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:52.023818  203848 cri.go:89] found id: ""
	I1212 01:08:52.023834  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:08:52.023903  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:52.028281  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:08:52.028375  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:08:52.058235  203848 cri.go:89] found id: ""
	I1212 01:08:52.058266  203848 logs.go:282] 0 containers: []
	W1212 01:08:52.058276  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:08:52.058282  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:08:52.058419  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:08:52.084362  203848 cri.go:89] found id: ""
	I1212 01:08:52.084388  203848 logs.go:282] 0 containers: []
	W1212 01:08:52.084397  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:08:52.084411  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:08:52.084442  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:52.120844  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:08:52.120878  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:08:52.150313  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:08:52.150349  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:52.179151  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:08:52.179177  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:08:52.207142  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:08:52.207170  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:08:52.265009  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:08:52.265047  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:08:52.277829  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:08:52.277858  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:08:52.366072  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:08:52.366092  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:08:52.366129  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:52.400791  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:08:52.400821  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
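	Every `describe nodes` attempt in this window fails the same way — connection refused on localhost:8443 — even though a kube-apiserver container ID is found on every pass. Since `crictl ps -a` also lists exited containers, that combination means the container is known to containerd but nothing is currently accepting connections on the apiserver port. As a quick, purely illustrative triage step (not something the test harness runs), bash's /dev/tcp pseudo-device can separate "no listener" from "listener present but unhealthy":

# Illustrative only: raw TCP dial against the apiserver port.
if timeout 2 bash -c 'exec 3<>/dev/tcp/127.0.0.1/8443' 2>/dev/null; then
    echo "port 8443 has a listener (apiserver up, though possibly unhealthy)"
else
    echo "connection refused/timeout: no apiserver listening on 8443"
fi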
	I1212 01:08:54.932576  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:54.942803  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:08:54.942877  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:08:54.968696  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:54.968718  203848 cri.go:89] found id: ""
	I1212 01:08:54.968727  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:08:54.968786  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:54.972549  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:08:54.972628  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:08:54.997161  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:54.997182  203848 cri.go:89] found id: ""
	I1212 01:08:54.997190  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:08:54.997248  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:55.000896  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:08:55.000975  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:08:55.038458  203848 cri.go:89] found id: ""
	I1212 01:08:55.038484  203848 logs.go:282] 0 containers: []
	W1212 01:08:55.038493  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:08:55.038499  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:08:55.038563  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:08:55.063393  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:55.063416  203848 cri.go:89] found id: ""
	I1212 01:08:55.063426  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:08:55.063484  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:55.067352  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:08:55.067428  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:08:55.093038  203848 cri.go:89] found id: ""
	I1212 01:08:55.093068  203848 logs.go:282] 0 containers: []
	W1212 01:08:55.093078  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:08:55.093085  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:08:55.093152  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:08:55.119360  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:55.119388  203848 cri.go:89] found id: ""
	I1212 01:08:55.119432  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:08:55.119514  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:55.123323  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:08:55.123409  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:08:55.152829  203848 cri.go:89] found id: ""
	I1212 01:08:55.152855  203848 logs.go:282] 0 containers: []
	W1212 01:08:55.152867  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:08:55.152875  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:08:55.152937  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:08:55.178675  203848 cri.go:89] found id: ""
	I1212 01:08:55.178701  203848 logs.go:282] 0 containers: []
	W1212 01:08:55.178710  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:08:55.178725  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:08:55.178739  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:55.226297  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:08:55.226328  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:55.257127  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:08:55.257156  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:55.290694  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:08:55.290723  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:08:55.324919  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:08:55.324994  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:08:55.390416  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:08:55.390452  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:08:55.403194  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:08:55.403221  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:08:55.464092  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:08:55.464114  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:08:55.464126  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:55.500221  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:08:55.500249  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
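
The "container status" step just above uses a shell fallback chain: prefer whichever crictl is on PATH, and fall back to `docker ps -a` if the crictl invocation fails. A sketch of the same fallback, assuming plain os/exec in place of the SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus lists all containers, preferring crictl and falling back
// to docker, mirroring the one-liner in the log:
//   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
func containerStatus() (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err == nil {
		return string(out), nil
	}
	// crictl missing or failing: fall back to the docker CLI.
	out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("both crictl and docker failed: %w", err)
	}
	return string(out), nil
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(out)
}
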
	I1212 01:08:58.030624  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:58.041144  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:08:58.041222  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:08:58.069147  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:58.069170  203848 cri.go:89] found id: ""
	I1212 01:08:58.069180  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:08:58.069239  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:58.073262  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:08:58.073339  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:08:58.098589  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:58.098612  203848 cri.go:89] found id: ""
	I1212 01:08:58.098620  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:08:58.098679  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:58.103034  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:08:58.103121  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:08:58.129458  203848 cri.go:89] found id: ""
	I1212 01:08:58.129482  203848 logs.go:282] 0 containers: []
	W1212 01:08:58.129491  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:08:58.129498  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:08:58.129563  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:08:58.155263  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:58.155286  203848 cri.go:89] found id: ""
	I1212 01:08:58.155294  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:08:58.155361  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:58.159195  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:08:58.159280  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:08:58.184315  203848 cri.go:89] found id: ""
	I1212 01:08:58.184339  203848 logs.go:282] 0 containers: []
	W1212 01:08:58.184348  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:08:58.184355  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:08:58.184433  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:08:58.209975  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:58.210045  203848 cri.go:89] found id: ""
	I1212 01:08:58.210066  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:08:58.210155  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:08:58.213904  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:08:58.214059  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:08:58.239376  203848 cri.go:89] found id: ""
	I1212 01:08:58.239403  203848 logs.go:282] 0 containers: []
	W1212 01:08:58.239411  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:08:58.239417  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:08:58.239477  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:08:58.264348  203848 cri.go:89] found id: ""
	I1212 01:08:58.264372  203848 logs.go:282] 0 containers: []
	W1212 01:08:58.264381  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:08:58.264395  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:08:58.264438  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:08:58.336695  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:08:58.336732  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:08:58.351635  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:08:58.351665  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:08:58.389929  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:08:58.389966  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:08:58.417061  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:08:58.417089  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:08:58.446691  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:08:58.446719  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:08:58.518987  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:08:58.519018  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:08:58.519030  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:08:58.551356  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:08:58.551391  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:08:58.583361  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:08:58.583392  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
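
Each cycle resolves container IDs per control-plane component with `crictl ps -a --quiet --name=<component>`; an empty result produces the `No container was found matching ...` warning (here consistently for coredns, kube-proxy, kindnet, and storage-provisioner). A sketch of that enumeration, with the component list taken from the log and the local execution an assumption:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findContainers returns the container IDs (one per line from crictl
// --quiet) whose name matches the given component; -a corresponds to the
// State:all filter printed by cri.go in the log.
func findContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	// Component names taken from the log above.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		ids, err := findContainers(c)
		switch {
		case err != nil:
			fmt.Printf("%s: crictl failed: %v\n", c, err)
		case len(ids) == 0:
			fmt.Printf("no container found matching %q\n", c)
		default:
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}
}
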
	I1212 01:09:01.112218  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:01.122649  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:09:01.122725  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:09:01.149797  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:01.149820  203848 cri.go:89] found id: ""
	I1212 01:09:01.149828  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:09:01.149887  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:01.153827  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:09:01.153901  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:09:01.180279  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:01.180303  203848 cri.go:89] found id: ""
	I1212 01:09:01.180312  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:09:01.180372  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:01.184629  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:09:01.184711  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:09:01.211452  203848 cri.go:89] found id: ""
	I1212 01:09:01.211519  203848 logs.go:282] 0 containers: []
	W1212 01:09:01.211541  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:09:01.211560  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:09:01.211655  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:09:01.238817  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:01.238840  203848 cri.go:89] found id: ""
	I1212 01:09:01.238849  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:09:01.238909  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:01.243206  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:09:01.243286  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:09:01.268440  203848 cri.go:89] found id: ""
	I1212 01:09:01.268466  203848 logs.go:282] 0 containers: []
	W1212 01:09:01.268474  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:09:01.268480  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:09:01.268545  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:09:01.317796  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:01.317817  203848 cri.go:89] found id: ""
	I1212 01:09:01.317825  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:09:01.317884  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:01.327567  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:09:01.327641  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:09:01.353431  203848 cri.go:89] found id: ""
	I1212 01:09:01.353455  203848 logs.go:282] 0 containers: []
	W1212 01:09:01.353464  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:09:01.353470  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:09:01.353526  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:09:01.383510  203848 cri.go:89] found id: ""
	I1212 01:09:01.383535  203848 logs.go:282] 0 containers: []
	W1212 01:09:01.383544  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:09:01.383558  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:09:01.383601  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:09:01.441801  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:09:01.441837  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:09:01.508307  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:09:01.508328  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:09:01.508409  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:01.556150  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:09:01.556185  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:01.588117  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:09:01.588152  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:01.616305  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:09:01.616335  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:01.655853  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:09:01.655883  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:09:01.684331  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:09:01.684359  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:09:01.697333  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:09:01.697361  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
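
The gathering step itself is a fixed set of shell commands, each capped at 400 lines: `journalctl -u <unit> -n 400` for kubelet and containerd, `crictl logs --tail 400 <id>` per discovered container, and a filtered dmesg tail. A sketch that runs the same set locally; the labels and the placeholder container ID are illustrative, not from the test:

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs runs the bounded log commands seen in the output above, keyed
// by a human-readable label. A real ID would come from the crictl discovery
// step; the argument here is a placeholder.
func gatherLogs(containerID string) map[string]string {
	cmds := map[string]string{
		"kubelet":    "sudo journalctl -u kubelet -n 400",
		"containerd": "sudo journalctl -u containerd -n 400",
		"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container":  "sudo /usr/local/bin/crictl logs --tail 400 " + containerID,
	}
	logs := make(map[string]string)
	for label, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			logs[label] = fmt.Sprintf("error: %v", err)
			continue
		}
		logs[label] = string(out)
	}
	return logs
}

func main() {
	for label, out := range gatherLogs("<container-id>") {
		fmt.Printf("== %s ==\n%s\n", label, out)
	}
}
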
	I1212 01:09:04.228362  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:04.241141  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:09:04.241224  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:09:04.268073  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:04.268094  203848 cri.go:89] found id: ""
	I1212 01:09:04.268102  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:09:04.268160  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:04.271996  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:09:04.272069  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:09:04.309031  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:04.309054  203848 cri.go:89] found id: ""
	I1212 01:09:04.309062  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:09:04.309122  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:04.313642  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:09:04.313718  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:09:04.349491  203848 cri.go:89] found id: ""
	I1212 01:09:04.349520  203848 logs.go:282] 0 containers: []
	W1212 01:09:04.349529  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:09:04.349536  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:09:04.349598  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:09:04.375771  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:04.375795  203848 cri.go:89] found id: ""
	I1212 01:09:04.375803  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:09:04.375860  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:04.379588  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:09:04.379662  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:09:04.406123  203848 cri.go:89] found id: ""
	I1212 01:09:04.406143  203848 logs.go:282] 0 containers: []
	W1212 01:09:04.406152  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:09:04.406157  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:09:04.406222  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:09:04.434311  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:04.434331  203848 cri.go:89] found id: ""
	I1212 01:09:04.434339  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:09:04.434397  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:04.438221  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:09:04.438334  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:09:04.470266  203848 cri.go:89] found id: ""
	I1212 01:09:04.470292  203848 logs.go:282] 0 containers: []
	W1212 01:09:04.470301  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:09:04.470307  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:09:04.470415  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:09:04.495635  203848 cri.go:89] found id: ""
	I1212 01:09:04.495709  203848 logs.go:282] 0 containers: []
	W1212 01:09:04.495733  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:09:04.495760  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:09:04.495779  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:04.527600  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:09:04.527633  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:04.566083  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:09:04.566114  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:09:04.595698  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:09:04.595731  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:09:04.625407  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:09:04.625433  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:09:04.687778  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:09:04.687812  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:04.734546  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:09:04.734574  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:04.762919  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:09:04.762944  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:09:04.775681  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:09:04.775713  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:09:04.844221  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
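
Every `describe nodes` attempt fails the same way: kubectl's connection to localhost:8443 is refused. Nothing is accepting connections on the apiserver port even though a kube-apiserver container ID keeps being found, i.e. the container exists but the server is not (yet) serving. A plain TCP dial reproduces that symptom directly and separates "port closed" from other kubectl failures; this probe is an illustration, not part of the test:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A refused dial here is the same condition kubectl reports as
	// "The connection to the server localhost:8443 was refused".
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("port 8443 is accepting connections")
}
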
	I1212 01:09:07.345727  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:07.355949  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:09:07.356021  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:09:07.380040  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:07.380060  203848 cri.go:89] found id: ""
	I1212 01:09:07.380068  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:09:07.380123  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:07.383900  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:09:07.383973  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:09:07.408774  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:07.408796  203848 cri.go:89] found id: ""
	I1212 01:09:07.408804  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:09:07.408861  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:07.412694  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:09:07.412772  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:09:07.441500  203848 cri.go:89] found id: ""
	I1212 01:09:07.441571  203848 logs.go:282] 0 containers: []
	W1212 01:09:07.441593  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:09:07.441614  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:09:07.441710  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:09:07.472198  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:07.472274  203848 cri.go:89] found id: ""
	I1212 01:09:07.472329  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:09:07.472425  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:07.476523  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:09:07.476606  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:09:07.505870  203848 cri.go:89] found id: ""
	I1212 01:09:07.505895  203848 logs.go:282] 0 containers: []
	W1212 01:09:07.505903  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:09:07.505916  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:09:07.506034  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:09:07.531533  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:07.531555  203848 cri.go:89] found id: ""
	I1212 01:09:07.531564  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:09:07.531640  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:07.536356  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:09:07.536455  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:09:07.569391  203848 cri.go:89] found id: ""
	I1212 01:09:07.569415  203848 logs.go:282] 0 containers: []
	W1212 01:09:07.569424  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:09:07.569430  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:09:07.569512  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:09:07.594878  203848 cri.go:89] found id: ""
	I1212 01:09:07.594903  203848 logs.go:282] 0 containers: []
	W1212 01:09:07.594912  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:09:07.594925  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:09:07.594969  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:09:07.663037  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:09:07.663060  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:09:07.663074  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:07.710561  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:09:07.710590  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:07.738145  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:09:07.738171  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:07.784545  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:09:07.784577  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:09:07.820722  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:09:07.820795  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:09:07.855367  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:09:07.855393  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:09:07.915594  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:09:07.915625  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:09:07.928698  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:09:07.928727  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:10.461690  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:10.471841  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:09:10.471911  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:09:10.497454  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:10.497477  203848 cri.go:89] found id: ""
	I1212 01:09:10.497486  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:09:10.497545  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:10.501332  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:09:10.501416  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:09:10.525784  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:10.525804  203848 cri.go:89] found id: ""
	I1212 01:09:10.525813  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:09:10.525872  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:10.529677  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:09:10.529749  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:09:10.559385  203848 cri.go:89] found id: ""
	I1212 01:09:10.559410  203848 logs.go:282] 0 containers: []
	W1212 01:09:10.559419  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:09:10.559426  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:09:10.559504  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:09:10.585231  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:10.585253  203848 cri.go:89] found id: ""
	I1212 01:09:10.585262  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:09:10.585336  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:10.588953  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:09:10.589071  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:09:10.614340  203848 cri.go:89] found id: ""
	I1212 01:09:10.614365  203848 logs.go:282] 0 containers: []
	W1212 01:09:10.614374  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:09:10.614381  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:09:10.614465  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:09:10.643621  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:10.643645  203848 cri.go:89] found id: ""
	I1212 01:09:10.643654  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:09:10.643740  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:10.647760  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:09:10.647836  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:09:10.673734  203848 cri.go:89] found id: ""
	I1212 01:09:10.673810  203848 logs.go:282] 0 containers: []
	W1212 01:09:10.673835  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:09:10.673855  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:09:10.673968  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:09:10.703741  203848 cri.go:89] found id: ""
	I1212 01:09:10.703766  203848 logs.go:282] 0 containers: []
	W1212 01:09:10.703775  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:09:10.703790  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:09:10.703802  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:09:10.716232  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:09:10.716262  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:10.755240  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:09:10.755270  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:10.789106  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:09:10.789133  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:10.822980  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:09:10.823129  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:09:10.850768  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:09:10.850797  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:09:10.908396  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:09:10.908433  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:09:10.981200  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:09:10.981260  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:09:10.981288  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:11.020356  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:09:11.020387  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:09:13.553984  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:13.564478  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:09:13.564588  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:09:13.588902  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:13.588924  203848 cri.go:89] found id: ""
	I1212 01:09:13.588933  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:09:13.589008  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:13.592812  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:09:13.592880  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:09:13.621219  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:13.621281  203848 cri.go:89] found id: ""
	I1212 01:09:13.621303  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:09:13.621389  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:13.625145  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:09:13.625226  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:09:13.651209  203848 cri.go:89] found id: ""
	I1212 01:09:13.651234  203848 logs.go:282] 0 containers: []
	W1212 01:09:13.651243  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:09:13.651249  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:09:13.651309  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:09:13.676678  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:13.676699  203848 cri.go:89] found id: ""
	I1212 01:09:13.676707  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:09:13.676769  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:13.680400  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:09:13.680519  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:09:13.704693  203848 cri.go:89] found id: ""
	I1212 01:09:13.704715  203848 logs.go:282] 0 containers: []
	W1212 01:09:13.704724  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:09:13.704730  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:09:13.704829  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:09:13.730184  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:13.730206  203848 cri.go:89] found id: ""
	I1212 01:09:13.730214  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:09:13.730271  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:13.733910  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:09:13.734034  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:09:13.759484  203848 cri.go:89] found id: ""
	I1212 01:09:13.759507  203848 logs.go:282] 0 containers: []
	W1212 01:09:13.759515  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:09:13.759522  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:09:13.759581  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:09:13.791185  203848 cri.go:89] found id: ""
	I1212 01:09:13.791211  203848 logs.go:282] 0 containers: []
	W1212 01:09:13.791219  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:09:13.791233  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:09:13.791265  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:09:13.848680  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:09:13.848715  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:09:13.861651  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:09:13.861680  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:13.893433  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:09:13.893470  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:13.921264  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:09:13.921573  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:09:13.953051  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:09:13.953087  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:09:13.981060  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:09:13.981091  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:09:14.049955  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:09:14.049977  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:09:14.049990  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:14.101134  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:09:14.101169  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:16.632676  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:16.643173  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:09:16.643292  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:09:16.668737  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:16.668759  203848 cri.go:89] found id: ""
	I1212 01:09:16.668767  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:09:16.668827  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:16.672727  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:09:16.672813  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:09:16.698278  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:16.698302  203848 cri.go:89] found id: ""
	I1212 01:09:16.698311  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:09:16.698369  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:16.702095  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:09:16.702173  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:09:16.729914  203848 cri.go:89] found id: ""
	I1212 01:09:16.729941  203848 logs.go:282] 0 containers: []
	W1212 01:09:16.729951  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:09:16.729957  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:09:16.730063  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:09:16.759246  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:16.759270  203848 cri.go:89] found id: ""
	I1212 01:09:16.759278  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:09:16.759335  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:16.763267  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:09:16.763339  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:09:16.791737  203848 cri.go:89] found id: ""
	I1212 01:09:16.791761  203848 logs.go:282] 0 containers: []
	W1212 01:09:16.791770  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:09:16.791776  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:09:16.791836  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:09:16.820335  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:16.820405  203848 cri.go:89] found id: ""
	I1212 01:09:16.820431  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:09:16.820504  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:16.824315  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:09:16.824396  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:09:16.851467  203848 cri.go:89] found id: ""
	I1212 01:09:16.851493  203848 logs.go:282] 0 containers: []
	W1212 01:09:16.851501  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:09:16.851507  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:09:16.851573  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:09:16.877148  203848 cri.go:89] found id: ""
	I1212 01:09:16.877177  203848 logs.go:282] 0 containers: []
	W1212 01:09:16.877185  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:09:16.877201  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:09:16.877241  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:09:16.942971  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:09:16.943019  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:09:16.943038  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:16.976567  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:09:16.976601  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:17.010030  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:09:17.010066  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:17.048108  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:09:17.048137  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:09:17.113779  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:09:17.113815  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:09:17.126553  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:09:17.126583  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:17.158161  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:09:17.158191  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:09:17.186963  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:09:17.186999  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:09:19.715757  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:19.725801  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:09:19.725876  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:09:19.753700  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:19.753722  203848 cri.go:89] found id: ""
	I1212 01:09:19.753731  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:09:19.753787  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:19.757528  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:09:19.757604  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:09:19.789304  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:19.789326  203848 cri.go:89] found id: ""
	I1212 01:09:19.789334  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:09:19.789389  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:19.792984  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:09:19.793052  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:09:19.818212  203848 cri.go:89] found id: ""
	I1212 01:09:19.818238  203848 logs.go:282] 0 containers: []
	W1212 01:09:19.818247  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:09:19.818254  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:09:19.818321  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:09:19.842431  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:19.842453  203848 cri.go:89] found id: ""
	I1212 01:09:19.842461  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:09:19.842515  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:19.846192  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:09:19.846265  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:09:19.870914  203848 cri.go:89] found id: ""
	I1212 01:09:19.870938  203848 logs.go:282] 0 containers: []
	W1212 01:09:19.870946  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:09:19.870953  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:09:19.871046  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:09:19.899732  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:19.899754  203848 cri.go:89] found id: ""
	I1212 01:09:19.899763  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:09:19.899821  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:19.903482  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:09:19.903551  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:09:19.927969  203848 cri.go:89] found id: ""
	I1212 01:09:19.928000  203848 logs.go:282] 0 containers: []
	W1212 01:09:19.928009  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:09:19.928015  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:09:19.928092  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:09:19.954639  203848 cri.go:89] found id: ""
	I1212 01:09:19.954668  203848 logs.go:282] 0 containers: []
	W1212 01:09:19.954678  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:09:19.954693  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:09:19.954707  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:19.988973  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:09:19.989004  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:09:20.051411  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:09:20.051455  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:20.086929  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:09:20.086963  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:20.118417  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:09:20.118447  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:20.149661  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:09:20.149692  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:09:20.180161  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:09:20.180195  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:09:20.219084  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:09:20.219108  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:09:20.231738  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:09:20.231768  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:09:20.295970  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:09:22.797648  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:22.807476  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:09:22.807571  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:09:22.837326  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:22.837350  203848 cri.go:89] found id: ""
	I1212 01:09:22.837358  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:09:22.837414  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:22.841692  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:09:22.841767  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:09:22.866593  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:22.866608  203848 cri.go:89] found id: ""
	I1212 01:09:22.866615  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:09:22.866666  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:22.870285  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:09:22.870366  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:09:22.894746  203848 cri.go:89] found id: ""
	I1212 01:09:22.894774  203848 logs.go:282] 0 containers: []
	W1212 01:09:22.894783  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:09:22.894788  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:09:22.894848  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:09:22.919181  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:22.919202  203848 cri.go:89] found id: ""
	I1212 01:09:22.919210  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:09:22.919271  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:22.922898  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:09:22.923024  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:09:22.948970  203848 cri.go:89] found id: ""
	I1212 01:09:22.948994  203848 logs.go:282] 0 containers: []
	W1212 01:09:22.949002  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:09:22.949009  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:09:22.949071  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:09:22.973485  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:22.973514  203848 cri.go:89] found id: ""
	I1212 01:09:22.973523  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:09:22.973588  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:22.977222  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:09:22.977306  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:09:23.001344  203848 cri.go:89] found id: ""
	I1212 01:09:23.001377  203848 logs.go:282] 0 containers: []
	W1212 01:09:23.001386  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:09:23.001407  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:09:23.001490  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:09:23.028480  203848 cri.go:89] found id: ""
	I1212 01:09:23.028505  203848 logs.go:282] 0 containers: []
	W1212 01:09:23.028516  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:09:23.028530  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:09:23.028560  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:09:23.044146  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:09:23.044180  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:23.088730  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:09:23.088802  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:23.125049  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:09:23.125082  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:09:23.163959  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:09:23.163986  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:09:23.223158  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:09:23.223190  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:09:23.284785  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:09:23.284806  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:09:23.284819  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:23.320724  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:09:23.320755  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:23.350918  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:09:23.350944  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:09:25.880728  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:25.895089  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:09:25.895161  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:09:25.937751  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:25.937774  203848 cri.go:89] found id: ""
	I1212 01:09:25.937782  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:09:25.937861  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:25.942137  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:09:25.942217  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:09:25.985920  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:25.985941  203848 cri.go:89] found id: ""
	I1212 01:09:25.985949  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:09:25.986005  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:25.992866  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:09:25.992944  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:09:26.045441  203848 cri.go:89] found id: ""
	I1212 01:09:26.045471  203848 logs.go:282] 0 containers: []
	W1212 01:09:26.045480  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:09:26.045486  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:09:26.045549  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:09:26.083653  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:26.083678  203848 cri.go:89] found id: ""
	I1212 01:09:26.083685  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:09:26.083744  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:26.089983  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:09:26.090071  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:09:26.157397  203848 cri.go:89] found id: ""
	I1212 01:09:26.157425  203848 logs.go:282] 0 containers: []
	W1212 01:09:26.157434  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:09:26.157440  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:09:26.157508  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:09:26.196570  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:26.196595  203848 cri.go:89] found id: ""
	I1212 01:09:26.196604  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:09:26.196676  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:26.201171  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:09:26.201257  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:09:26.233449  203848 cri.go:89] found id: ""
	I1212 01:09:26.233485  203848 logs.go:282] 0 containers: []
	W1212 01:09:26.233495  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:09:26.233501  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:09:26.233571  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:09:26.261472  203848 cri.go:89] found id: ""
	I1212 01:09:26.261505  203848 logs.go:282] 0 containers: []
	W1212 01:09:26.261514  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:09:26.261528  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:09:26.261540  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:26.303730  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:09:26.303764  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:26.345613  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:09:26.345647  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:09:26.395637  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:09:26.395665  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:09:26.460624  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:09:26.460658  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:09:26.474682  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:09:26.474717  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:09:26.554916  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
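Each retry walks the same component list (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, storage-provisioner) and records which ones have a container in any state. Only the static control-plane pods are present; everything that must be scheduled through the apiserver (coredns, kube-proxy, storage-provisioner) is missing, which is consistent with the apiserver never becoming reachable. The discovery step can be reproduced directly, a sketch using only commands already shown in the log:

    # one crictl query per component; empty output corresponds to the
    # 'No container was found matching ...' warnings above
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=coredns
    sudo crictl ps -a --quiet --name=kube-proxy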
	I1212 01:09:26.554939  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:09:26.554951  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:26.596786  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:09:26.596820  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:26.634361  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:09:26.634454  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:09:29.167983  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:29.180036  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:09:29.180125  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:09:29.205573  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:29.205596  203848 cri.go:89] found id: ""
	I1212 01:09:29.205604  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:09:29.205659  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:29.209180  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:09:29.209250  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:09:29.238068  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:29.238091  203848 cri.go:89] found id: ""
	I1212 01:09:29.238100  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:09:29.238162  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:29.241849  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:09:29.241922  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:09:29.266540  203848 cri.go:89] found id: ""
	I1212 01:09:29.266566  203848 logs.go:282] 0 containers: []
	W1212 01:09:29.266575  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:09:29.266581  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:09:29.266640  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:09:29.293818  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:29.293917  203848 cri.go:89] found id: ""
	I1212 01:09:29.293950  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:09:29.294066  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:29.297905  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:09:29.298062  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:09:29.321942  203848 cri.go:89] found id: ""
	I1212 01:09:29.322005  203848 logs.go:282] 0 containers: []
	W1212 01:09:29.322028  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:09:29.322049  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:09:29.322124  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:09:29.346430  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:29.346492  203848 cri.go:89] found id: ""
	I1212 01:09:29.346521  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:09:29.346595  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:29.350308  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:09:29.350433  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:09:29.374534  203848 cri.go:89] found id: ""
	I1212 01:09:29.374600  203848 logs.go:282] 0 containers: []
	W1212 01:09:29.374625  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:09:29.374644  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:09:29.374723  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:09:29.399135  203848 cri.go:89] found id: ""
	I1212 01:09:29.399201  203848 logs.go:282] 0 containers: []
	W1212 01:09:29.399216  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:09:29.399231  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:09:29.399244  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:29.432560  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:09:29.432593  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:29.471223  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:09:29.471253  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:29.505701  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:09:29.505732  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:09:29.533858  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:09:29.533896  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:09:29.560991  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:09:29.561021  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:09:29.619347  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:09:29.619383  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:09:29.632697  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:09:29.632725  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:09:29.702758  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:09:29.702781  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:09:29.702798  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:32.231842  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:32.245542  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:09:32.245633  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:09:32.292287  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:32.292308  203848 cri.go:89] found id: ""
	I1212 01:09:32.292316  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:09:32.292389  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:32.297533  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:09:32.297630  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:09:32.333577  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:32.333604  203848 cri.go:89] found id: ""
	I1212 01:09:32.333612  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:09:32.333686  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:32.339420  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:09:32.339509  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:09:32.375214  203848 cri.go:89] found id: ""
	I1212 01:09:32.375241  203848 logs.go:282] 0 containers: []
	W1212 01:09:32.375256  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:09:32.375263  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:09:32.375343  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:09:32.410046  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:32.410071  203848 cri.go:89] found id: ""
	I1212 01:09:32.410080  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:09:32.410156  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:32.415806  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:09:32.415887  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:09:32.453336  203848 cri.go:89] found id: ""
	I1212 01:09:32.453366  203848 logs.go:282] 0 containers: []
	W1212 01:09:32.453374  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:09:32.453384  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:09:32.453451  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:09:32.499693  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:32.499802  203848 cri.go:89] found id: ""
	I1212 01:09:32.499869  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:09:32.499988  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:32.506133  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:09:32.506347  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:09:32.545630  203848 cri.go:89] found id: ""
	I1212 01:09:32.545705  203848 logs.go:282] 0 containers: []
	W1212 01:09:32.545726  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:09:32.545746  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:09:32.545875  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:09:32.580026  203848 cri.go:89] found id: ""
	I1212 01:09:32.580097  203848 logs.go:282] 0 containers: []
	W1212 01:09:32.580133  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:09:32.580164  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:09:32.580218  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:32.621582  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:09:32.621667  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:32.683541  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:09:32.683635  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:09:32.736262  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:09:32.736328  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:09:32.817025  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:09:32.817602  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:09:32.835519  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:09:32.835615  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:09:32.958120  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:09:32.958141  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:09:32.958155  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:32.994113  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:09:32.994147  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:33.024896  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:09:33.024925  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:09:35.557844  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:35.567668  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:09:35.567742  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:09:35.591915  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:35.591936  203848 cri.go:89] found id: ""
	I1212 01:09:35.591945  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:09:35.592006  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:35.595665  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:09:35.595737  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:09:35.620675  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:35.620694  203848 cri.go:89] found id: ""
	I1212 01:09:35.620702  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:09:35.620765  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:35.624601  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:09:35.624675  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:09:35.648221  203848 cri.go:89] found id: ""
	I1212 01:09:35.648243  203848 logs.go:282] 0 containers: []
	W1212 01:09:35.648252  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:09:35.648258  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:09:35.648322  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:09:35.682397  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:35.682417  203848 cri.go:89] found id: ""
	I1212 01:09:35.682425  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:09:35.682486  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:35.686227  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:09:35.686324  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:09:35.710918  203848 cri.go:89] found id: ""
	I1212 01:09:35.710944  203848 logs.go:282] 0 containers: []
	W1212 01:09:35.710953  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:09:35.710959  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:09:35.711122  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:09:35.736050  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:35.736073  203848 cri.go:89] found id: ""
	I1212 01:09:35.736082  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:09:35.736139  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:35.740104  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:09:35.740192  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:09:35.772446  203848 cri.go:89] found id: ""
	I1212 01:09:35.772510  203848 logs.go:282] 0 containers: []
	W1212 01:09:35.772534  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:09:35.772553  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:09:35.772645  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:09:35.816016  203848 cri.go:89] found id: ""
	I1212 01:09:35.816043  203848 logs.go:282] 0 containers: []
	W1212 01:09:35.816064  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:09:35.816080  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:09:35.816094  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:09:35.848031  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:09:35.848060  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:09:35.861051  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:09:35.861125  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:35.894503  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:09:35.894534  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:35.921779  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:09:35.921815  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:35.952579  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:09:35.952614  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:09:36.011729  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:09:36.011772  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:09:36.083847  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:09:36.083870  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:09:36.083884  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:36.119070  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:09:36.119100  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:09:38.646953  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:38.657119  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:09:38.657190  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:09:38.680945  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:38.680970  203848 cri.go:89] found id: ""
	I1212 01:09:38.680978  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:09:38.681035  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:38.684640  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:09:38.684711  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:09:38.709132  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:38.709152  203848 cri.go:89] found id: ""
	I1212 01:09:38.709160  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:09:38.709215  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:38.712904  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:09:38.712979  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:09:38.738022  203848 cri.go:89] found id: ""
	I1212 01:09:38.738044  203848 logs.go:282] 0 containers: []
	W1212 01:09:38.738053  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:09:38.738059  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:09:38.738123  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:09:38.763431  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:38.763456  203848 cri.go:89] found id: ""
	I1212 01:09:38.763464  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:09:38.763531  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:38.768034  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:09:38.768158  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:09:38.811984  203848 cri.go:89] found id: ""
	I1212 01:09:38.812019  203848 logs.go:282] 0 containers: []
	W1212 01:09:38.812032  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:09:38.812065  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:09:38.812175  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:09:38.840100  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:38.840171  203848 cri.go:89] found id: ""
	I1212 01:09:38.840193  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:09:38.840273  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:38.844873  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:09:38.844997  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:09:38.870748  203848 cri.go:89] found id: ""
	I1212 01:09:38.870823  203848 logs.go:282] 0 containers: []
	W1212 01:09:38.870855  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:09:38.870874  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:09:38.870956  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:09:38.896887  203848 cri.go:89] found id: ""
	I1212 01:09:38.896911  203848 logs.go:282] 0 containers: []
	W1212 01:09:38.896920  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:09:38.896965  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:09:38.896985  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:38.929088  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:09:38.929118  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:09:38.987614  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:09:38.987648  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:09:39.000590  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:09:39.000620  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:39.030251  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:09:39.030280  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:39.063216  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:09:39.063255  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:09:39.091662  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:09:39.091702  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:09:39.137009  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:09:39.137046  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:09:39.200156  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
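After every failed probe, minikube collects the same diagnostic bundle: the kubelet and containerd journald units, the last 400 lines of each control-plane container, filtered dmesg output, and an overall container status listing. The equivalent manual collection, as a sketch (substitute a real container id from one of the 'found id' lines above):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400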
	I1212 01:09:39.200174  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:09:39.200186  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:41.738495  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:41.749505  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:09:41.749575  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:09:41.776997  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:41.777018  203848 cri.go:89] found id: ""
	I1212 01:09:41.777026  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:09:41.777083  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:41.782860  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:09:41.782937  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:09:41.813147  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:41.813170  203848 cri.go:89] found id: ""
	I1212 01:09:41.813179  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:09:41.813235  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:41.817559  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:09:41.817656  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:09:41.846763  203848 cri.go:89] found id: ""
	I1212 01:09:41.846796  203848 logs.go:282] 0 containers: []
	W1212 01:09:41.846806  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:09:41.846813  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:09:41.846889  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:09:41.876256  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:41.876277  203848 cri.go:89] found id: ""
	I1212 01:09:41.876285  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:09:41.876340  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:41.879970  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:09:41.880040  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:09:41.908107  203848 cri.go:89] found id: ""
	I1212 01:09:41.908171  203848 logs.go:282] 0 containers: []
	W1212 01:09:41.908185  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:09:41.908192  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:09:41.908260  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:09:41.937924  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:41.937947  203848 cri.go:89] found id: ""
	I1212 01:09:41.937956  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:09:41.938025  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:41.941948  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:09:41.942035  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:09:41.970724  203848 cri.go:89] found id: ""
	I1212 01:09:41.970757  203848 logs.go:282] 0 containers: []
	W1212 01:09:41.970767  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:09:41.970773  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:09:41.970851  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:09:41.995955  203848 cri.go:89] found id: ""
	I1212 01:09:41.995979  203848 logs.go:282] 0 containers: []
	W1212 01:09:41.995989  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:09:41.996003  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:09:41.996013  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:09:42.058812  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:09:42.058848  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:09:42.073939  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:09:42.073975  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:09:42.152582  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
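Note: the connection refusal above is the root symptom of this whole wait loop. crictl does find a running kube-apiserver container (8508ebbba553…), but nothing accepts connections on localhost:8443, so every kubectl call made through /var/lib/minikube/kubeconfig fails the same way. A minimal manual spot-check from inside the node, assuming curl is installed there, might look like:

	# hedged sketch: probe the apiserver's health endpoint directly
	curl -sk https://localhost:8443/healthz; echo
	# and confirm the apiserver process itself is up (same check the loop runs)
	sudo pgrep -af kube-apiserver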
	I1212 01:09:42.152677  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:09:42.152713  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:42.189640  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:09:42.189675  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:09:42.223365  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:09:42.223406  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:42.268768  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:09:42.268803  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:42.309321  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:09:42.309353  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:42.339421  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:09:42.339454  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
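Note: everything from the pgrep call down to the "container status" line is one iteration of minikube's apiserver health-wait loop; it repeats roughly every three seconds below with only the timestamps changing. On every pass only kube-apiserver, etcd, kube-scheduler, and kube-controller-manager are found, while coredns, kube-proxy, kindnet, and storage-provisioner stay empty. The per-component census can be reproduced by hand with the same command the loop runs:

	# list all containers for one component, exactly as the loop does
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=coredns    # empty here: coredns never started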
	I1212 01:09:44.883164  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:44.893711  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:09:44.893803  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:09:44.919219  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:44.919238  203848 cri.go:89] found id: ""
	I1212 01:09:44.919246  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:09:44.919308  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:44.924707  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:09:44.924783  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:09:44.959308  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:44.959330  203848 cri.go:89] found id: ""
	I1212 01:09:44.959338  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:09:44.959394  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:44.963092  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:09:44.963170  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:09:44.992261  203848 cri.go:89] found id: ""
	I1212 01:09:44.992286  203848 logs.go:282] 0 containers: []
	W1212 01:09:44.992295  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:09:44.992301  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:09:44.992364  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:09:45.039135  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:45.039157  203848 cri.go:89] found id: ""
	I1212 01:09:45.039166  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:09:45.039226  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:45.044930  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:09:45.045011  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:09:45.121289  203848 cri.go:89] found id: ""
	I1212 01:09:45.121315  203848 logs.go:282] 0 containers: []
	W1212 01:09:45.121324  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:09:45.121330  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:09:45.121402  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:09:45.170749  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:45.170836  203848 cri.go:89] found id: ""
	I1212 01:09:45.170861  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:09:45.171020  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:45.176505  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:09:45.176661  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:09:45.230748  203848 cri.go:89] found id: ""
	I1212 01:09:45.230781  203848 logs.go:282] 0 containers: []
	W1212 01:09:45.230791  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:09:45.230800  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:09:45.230886  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:09:45.274878  203848 cri.go:89] found id: ""
	I1212 01:09:45.274921  203848 logs.go:282] 0 containers: []
	W1212 01:09:45.274949  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:09:45.274969  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:09:45.275011  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:09:45.335934  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:09:45.335969  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:45.385088  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:09:45.385136  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:45.412260  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:09:45.412286  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:45.446202  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:09:45.446239  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:09:45.474717  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:09:45.474756  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:09:45.520139  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:09:45.520166  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:09:45.540082  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:09:45.540116  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:09:45.614001  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:09:45.614022  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:09:45.614036  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:48.147435  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:48.158430  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:09:48.158507  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:09:48.185737  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:48.185757  203848 cri.go:89] found id: ""
	I1212 01:09:48.185789  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:09:48.185848  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:48.189890  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:09:48.189977  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:09:48.217387  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:48.217411  203848 cri.go:89] found id: ""
	I1212 01:09:48.217420  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:09:48.217476  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:48.221288  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:09:48.221365  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:09:48.246901  203848 cri.go:89] found id: ""
	I1212 01:09:48.246927  203848 logs.go:282] 0 containers: []
	W1212 01:09:48.246937  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:09:48.246942  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:09:48.247062  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:09:48.274052  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:48.274093  203848 cri.go:89] found id: ""
	I1212 01:09:48.274103  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:09:48.274161  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:48.278063  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:09:48.278138  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:09:48.306242  203848 cri.go:89] found id: ""
	I1212 01:09:48.306270  203848 logs.go:282] 0 containers: []
	W1212 01:09:48.306281  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:09:48.306287  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:09:48.306347  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:09:48.331697  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:48.331732  203848 cri.go:89] found id: ""
	I1212 01:09:48.331741  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:09:48.331820  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:48.335655  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:09:48.335752  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:09:48.366386  203848 cri.go:89] found id: ""
	I1212 01:09:48.366407  203848 logs.go:282] 0 containers: []
	W1212 01:09:48.366415  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:09:48.366421  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:09:48.366483  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:09:48.393003  203848 cri.go:89] found id: ""
	I1212 01:09:48.393070  203848 logs.go:282] 0 containers: []
	W1212 01:09:48.393093  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:09:48.393115  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:09:48.393128  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:48.435948  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:09:48.435982  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:48.465187  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:09:48.465218  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:48.496571  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:09:48.496601  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:09:48.509765  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:09:48.509793  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:09:48.615392  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:09:48.615414  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:09:48.615428  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:09:48.652862  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:09:48.652937  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:09:48.690047  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:09:48.690072  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:09:48.751156  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:09:48.751193  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
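Note: each iteration tails the last 400 lines of every container it found via "crictl logs --tail 400 <id>". When triaging a report like this by hand, the ID lookup and the log tail can be chained; a sketch built only from the two commands the loop itself uses:

	# tail one component's logs by name instead of by raw ID (sketch)
	id="$(sudo crictl ps -a --quiet --name=etcd | head -n1)"
	[ -n "$id" ] && sudo /usr/local/bin/crictl logs --tail 400 "$id"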
	I1212 01:09:51.295167  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:51.306088  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:09:51.306163  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:09:51.338738  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:51.338757  203848 cri.go:89] found id: ""
	I1212 01:09:51.338765  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:09:51.338821  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:51.343619  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:09:51.343687  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:09:51.373197  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:51.373215  203848 cri.go:89] found id: ""
	I1212 01:09:51.373223  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:09:51.373278  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:51.380334  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:09:51.380420  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:09:51.418211  203848 cri.go:89] found id: ""
	I1212 01:09:51.418273  203848 logs.go:282] 0 containers: []
	W1212 01:09:51.418294  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:09:51.418312  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:09:51.418417  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:09:51.452836  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:51.452886  203848 cri.go:89] found id: ""
	I1212 01:09:51.452907  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:09:51.452997  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:51.457593  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:09:51.457702  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:09:51.489234  203848 cri.go:89] found id: ""
	I1212 01:09:51.489297  203848 logs.go:282] 0 containers: []
	W1212 01:09:51.489319  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:09:51.489338  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:09:51.489438  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:09:51.517841  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:51.517866  203848 cri.go:89] found id: ""
	I1212 01:09:51.517889  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:09:51.517964  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:51.523094  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:09:51.523181  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:09:51.555109  203848 cri.go:89] found id: ""
	I1212 01:09:51.555145  203848 logs.go:282] 0 containers: []
	W1212 01:09:51.555154  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:09:51.555161  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:09:51.555232  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:09:51.639782  203848 cri.go:89] found id: ""
	I1212 01:09:51.639820  203848 logs.go:282] 0 containers: []
	W1212 01:09:51.639829  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:09:51.639851  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:09:51.639876  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:09:51.710283  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:09:51.710320  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:09:51.798196  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:09:51.798214  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:09:51.798226  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:51.839223  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:09:51.839255  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:51.897802  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:09:51.897833  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:51.946176  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:09:51.946209  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:09:51.977208  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:09:51.977244  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:09:51.991613  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:09:51.991637  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:52.035688  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:09:52.035879  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:09:54.581517  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:54.591512  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:09:54.591581  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:09:54.616326  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:54.616348  203848 cri.go:89] found id: ""
	I1212 01:09:54.616356  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:09:54.616413  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:54.620289  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:09:54.620369  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:09:54.647555  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:54.647578  203848 cri.go:89] found id: ""
	I1212 01:09:54.647586  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:09:54.647648  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:54.651465  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:09:54.651542  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:09:54.676644  203848 cri.go:89] found id: ""
	I1212 01:09:54.676719  203848 logs.go:282] 0 containers: []
	W1212 01:09:54.676736  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:09:54.676743  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:09:54.676818  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:09:54.702398  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:54.702421  203848 cri.go:89] found id: ""
	I1212 01:09:54.702430  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:09:54.702500  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:54.706201  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:09:54.706275  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:09:54.733049  203848 cri.go:89] found id: ""
	I1212 01:09:54.733081  203848 logs.go:282] 0 containers: []
	W1212 01:09:54.733090  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:09:54.733097  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:09:54.733167  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:09:54.761499  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:54.761522  203848 cri.go:89] found id: ""
	I1212 01:09:54.761530  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:09:54.761586  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:54.765479  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:09:54.765549  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:09:54.791963  203848 cri.go:89] found id: ""
	I1212 01:09:54.791996  203848 logs.go:282] 0 containers: []
	W1212 01:09:54.792006  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:09:54.792013  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:09:54.792092  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:09:54.816407  203848 cri.go:89] found id: ""
	I1212 01:09:54.816428  203848 logs.go:282] 0 containers: []
	W1212 01:09:54.816436  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:09:54.816452  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:09:54.816463  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:54.855730  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:09:54.855768  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:54.900328  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:09:54.900358  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:54.945474  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:09:54.945503  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:09:54.980792  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:09:54.980828  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:09:55.054286  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:09:55.054375  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:09:55.070087  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:09:55.070111  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:09:55.159753  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:09:55.159779  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:09:55.159792  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:09:55.207897  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:09:55.207924  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:57.771135  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:57.782638  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:09:57.782710  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:09:57.812034  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:57.812068  203848 cri.go:89] found id: ""
	I1212 01:09:57.812077  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:09:57.812136  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:57.816016  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:09:57.816087  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:09:57.842167  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:57.842188  203848 cri.go:89] found id: ""
	I1212 01:09:57.842196  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:09:57.842251  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:57.846208  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:09:57.846280  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:09:57.873349  203848 cri.go:89] found id: ""
	I1212 01:09:57.873374  203848 logs.go:282] 0 containers: []
	W1212 01:09:57.873383  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:09:57.873389  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:09:57.873451  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:09:57.899362  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:57.899383  203848 cri.go:89] found id: ""
	I1212 01:09:57.899391  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:09:57.899453  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:57.903166  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:09:57.903246  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:09:57.929203  203848 cri.go:89] found id: ""
	I1212 01:09:57.929231  203848 logs.go:282] 0 containers: []
	W1212 01:09:57.929240  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:09:57.929247  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:09:57.929325  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:09:57.955024  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:57.955047  203848 cri.go:89] found id: ""
	I1212 01:09:57.955056  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:09:57.955112  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:09:57.958741  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:09:57.958813  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:09:57.985217  203848 cri.go:89] found id: ""
	I1212 01:09:57.985238  203848 logs.go:282] 0 containers: []
	W1212 01:09:57.985246  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:09:57.985252  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:09:57.985310  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:09:58.015772  203848 cri.go:89] found id: ""
	I1212 01:09:58.015800  203848 logs.go:282] 0 containers: []
	W1212 01:09:58.015810  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:09:58.015851  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:09:58.015867  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:09:58.073922  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:09:58.073961  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:09:58.086859  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:09:58.086889  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:09:58.135845  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:09:58.135876  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:09:58.163136  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:09:58.163170  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:09:58.191325  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:09:58.191352  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:09:58.254215  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:09:58.254232  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:09:58.254277  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:09:58.287351  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:09:58.287385  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:09:58.330893  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:09:58.330922  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:10:00.862323  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:10:00.872829  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:10:00.872903  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:10:00.902232  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:00.902261  203848 cri.go:89] found id: ""
	I1212 01:10:00.902270  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:10:00.902332  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:00.906172  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:10:00.906251  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:10:00.934791  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:00.934811  203848 cri.go:89] found id: ""
	I1212 01:10:00.934825  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:10:00.934879  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:00.938673  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:10:00.938749  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:10:00.963328  203848 cri.go:89] found id: ""
	I1212 01:10:00.963352  203848 logs.go:282] 0 containers: []
	W1212 01:10:00.963361  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:10:00.963367  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:10:00.963425  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:10:00.992039  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:00.992064  203848 cri.go:89] found id: ""
	I1212 01:10:00.992072  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:10:00.992134  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:00.996001  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:10:00.996079  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:10:01.026623  203848 cri.go:89] found id: ""
	I1212 01:10:01.026646  203848 logs.go:282] 0 containers: []
	W1212 01:10:01.026654  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:10:01.026660  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:10:01.026725  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:10:01.052829  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:01.052857  203848 cri.go:89] found id: ""
	I1212 01:10:01.052866  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:10:01.052925  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:01.056891  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:10:01.056974  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:10:01.085846  203848 cri.go:89] found id: ""
	I1212 01:10:01.085870  203848 logs.go:282] 0 containers: []
	W1212 01:10:01.085879  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:10:01.085885  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:10:01.085950  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:10:01.115823  203848 cri.go:89] found id: ""
	I1212 01:10:01.115859  203848 logs.go:282] 0 containers: []
	W1212 01:10:01.115870  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:10:01.115886  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:10:01.115928  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:10:01.184781  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:10:01.184800  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:10:01.184813  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:01.217975  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:10:01.218010  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:01.247095  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:10:01.247126  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:01.280412  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:10:01.280444  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:10:01.312746  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:10:01.312787  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:10:01.341149  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:10:01.341182  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:10:01.400001  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:10:01.400040  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:10:01.413887  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:10:01.413916  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:03.950240  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:10:03.960421  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:10:03.960499  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:10:03.994215  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:03.994238  203848 cri.go:89] found id: ""
	I1212 01:10:03.994247  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:10:03.994305  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:03.998088  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:10:03.998160  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:10:04.025578  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:04.025600  203848 cri.go:89] found id: ""
	I1212 01:10:04.025608  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:10:04.025670  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:04.029848  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:10:04.029935  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:10:04.057643  203848 cri.go:89] found id: ""
	I1212 01:10:04.057752  203848 logs.go:282] 0 containers: []
	W1212 01:10:04.057790  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:10:04.057816  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:10:04.057952  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:10:04.084286  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:04.084309  203848 cri.go:89] found id: ""
	I1212 01:10:04.084319  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:10:04.084380  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:04.088338  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:10:04.088414  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:10:04.113880  203848 cri.go:89] found id: ""
	I1212 01:10:04.113905  203848 logs.go:282] 0 containers: []
	W1212 01:10:04.113914  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:10:04.113921  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:10:04.113981  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:10:04.140634  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:04.140656  203848 cri.go:89] found id: ""
	I1212 01:10:04.140664  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:10:04.140723  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:04.144805  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:10:04.144918  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:10:04.169582  203848 cri.go:89] found id: ""
	I1212 01:10:04.169607  203848 logs.go:282] 0 containers: []
	W1212 01:10:04.169616  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:10:04.169623  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:10:04.169685  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:10:04.195339  203848 cri.go:89] found id: ""
	I1212 01:10:04.195364  203848 logs.go:282] 0 containers: []
	W1212 01:10:04.195373  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:10:04.195389  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:10:04.195400  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:10:04.208248  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:10:04.208276  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:10:04.270230  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:10:04.270253  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:10:04.270267  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:04.339593  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:10:04.339628  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:04.375647  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:10:04.375679  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:04.406705  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:10:04.406735  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:10:04.436922  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:10:04.436956  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:10:04.467380  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:10:04.467409  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:10:04.526890  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:10:04.526925  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:07.059931  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:10:07.071899  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:10:07.071970  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:10:07.096945  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:07.096964  203848 cri.go:89] found id: ""
	I1212 01:10:07.096972  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:10:07.097028  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:07.100914  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:10:07.100984  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:10:07.129345  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:07.129367  203848 cri.go:89] found id: ""
	I1212 01:10:07.129376  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:10:07.129435  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:07.133202  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:10:07.133293  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:10:07.160036  203848 cri.go:89] found id: ""
	I1212 01:10:07.160110  203848 logs.go:282] 0 containers: []
	W1212 01:10:07.160127  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:10:07.160134  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:10:07.160192  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:10:07.190080  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:07.190102  203848 cri.go:89] found id: ""
	I1212 01:10:07.190110  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:10:07.190170  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:07.193986  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:10:07.194059  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:10:07.221571  203848 cri.go:89] found id: ""
	I1212 01:10:07.221597  203848 logs.go:282] 0 containers: []
	W1212 01:10:07.221606  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:10:07.221613  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:10:07.221672  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:10:07.249646  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:07.249671  203848 cri.go:89] found id: ""
	I1212 01:10:07.249679  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:10:07.249747  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:07.253460  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:10:07.253535  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:10:07.281646  203848 cri.go:89] found id: ""
	I1212 01:10:07.281674  203848 logs.go:282] 0 containers: []
	W1212 01:10:07.281683  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:10:07.281689  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:10:07.281753  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:10:07.317359  203848 cri.go:89] found id: ""
	I1212 01:10:07.317392  203848 logs.go:282] 0 containers: []
	W1212 01:10:07.317401  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:10:07.317421  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:10:07.317439  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:10:07.380930  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:10:07.380968  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:10:07.393714  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:10:07.393743  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:07.428871  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:10:07.428905  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:07.460924  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:10:07.460954  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:07.486888  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:10:07.486915  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:07.517588  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:10:07.517618  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:10:07.547488  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:10:07.547518  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:10:07.613211  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:10:07.613230  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:10:07.613243  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:10:10.140471  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:10:10.150679  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:10:10.150749  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:10:10.175720  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:10.175745  203848 cri.go:89] found id: ""
	I1212 01:10:10.175753  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:10:10.175811  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:10.179735  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:10:10.179810  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:10:10.205487  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:10.205509  203848 cri.go:89] found id: ""
	I1212 01:10:10.205517  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:10:10.205574  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:10.209322  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:10:10.209404  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:10:10.239987  203848 cri.go:89] found id: ""
	I1212 01:10:10.240011  203848 logs.go:282] 0 containers: []
	W1212 01:10:10.240021  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:10:10.240036  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:10:10.240100  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:10:10.264532  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:10.264556  203848 cri.go:89] found id: ""
	I1212 01:10:10.264564  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:10:10.264645  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:10.268676  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:10:10.268761  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:10:10.302647  203848 cri.go:89] found id: ""
	I1212 01:10:10.302673  203848 logs.go:282] 0 containers: []
	W1212 01:10:10.302682  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:10:10.302688  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:10:10.302748  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:10:10.335485  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:10.335509  203848 cri.go:89] found id: ""
	I1212 01:10:10.335517  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:10:10.335582  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:10.339708  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:10:10.339781  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:10:10.369272  203848 cri.go:89] found id: ""
	I1212 01:10:10.369296  203848 logs.go:282] 0 containers: []
	W1212 01:10:10.369305  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:10:10.369312  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:10:10.369376  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:10:10.395871  203848 cri.go:89] found id: ""
	I1212 01:10:10.395897  203848 logs.go:282] 0 containers: []
	W1212 01:10:10.395906  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:10:10.395920  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:10:10.395931  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:10:10.453115  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:10:10.453150  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:10:10.521677  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:10:10.521711  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:10:10.521725  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:10.553904  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:10:10.553934  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:10.581478  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:10:10.581511  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:10.620493  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:10:10.620527  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:10:10.652829  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:10:10.652858  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:10:10.665570  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:10:10.665598  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:10.699178  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:10:10.699225  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:10:13.229708  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:10:13.241896  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:10:13.241996  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:10:13.270090  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:13.270110  203848 cri.go:89] found id: ""
	I1212 01:10:13.270118  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:10:13.270180  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:13.273989  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:10:13.274064  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:10:13.303472  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:13.303495  203848 cri.go:89] found id: ""
	I1212 01:10:13.303504  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:10:13.303560  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:13.307685  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:10:13.307756  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:10:13.339049  203848 cri.go:89] found id: ""
	I1212 01:10:13.339074  203848 logs.go:282] 0 containers: []
	W1212 01:10:13.339083  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:10:13.339089  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:10:13.339156  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:10:13.368996  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:13.369015  203848 cri.go:89] found id: ""
	I1212 01:10:13.369023  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:10:13.369079  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:13.372716  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:10:13.372789  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:10:13.397094  203848 cri.go:89] found id: ""
	I1212 01:10:13.397117  203848 logs.go:282] 0 containers: []
	W1212 01:10:13.397126  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:10:13.397132  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:10:13.397223  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:10:13.421949  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:13.421974  203848 cri.go:89] found id: ""
	I1212 01:10:13.421983  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:10:13.422074  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:13.425818  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:10:13.425921  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:10:13.450774  203848 cri.go:89] found id: ""
	I1212 01:10:13.450815  203848 logs.go:282] 0 containers: []
	W1212 01:10:13.450825  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:10:13.450832  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:10:13.450901  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:10:13.477480  203848 cri.go:89] found id: ""
	I1212 01:10:13.477517  203848 logs.go:282] 0 containers: []
	W1212 01:10:13.477527  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:10:13.477540  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:10:13.477555  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:13.511339  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:10:13.511371  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:13.543559  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:10:13.543594  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:13.572713  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:10:13.572741  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:13.603987  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:10:13.604019  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:10:13.661700  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:10:13.661735  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:10:13.689782  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:10:13.689818  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:10:13.718167  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:10:13.718197  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:10:13.730685  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:10:13.730722  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:10:13.803795  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:10:16.305241  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:10:16.318084  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:10:16.318193  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:10:16.373021  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:16.373046  203848 cri.go:89] found id: ""
	I1212 01:10:16.373054  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:10:16.373113  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:16.377335  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:10:16.377409  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:10:16.405383  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:16.405402  203848 cri.go:89] found id: ""
	I1212 01:10:16.405410  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:10:16.405467  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:16.409440  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:10:16.409520  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:10:16.434813  203848 cri.go:89] found id: ""
	I1212 01:10:16.434840  203848 logs.go:282] 0 containers: []
	W1212 01:10:16.434849  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:10:16.434856  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:10:16.434920  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:10:16.469327  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:16.469350  203848 cri.go:89] found id: ""
	I1212 01:10:16.469358  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:10:16.469419  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:16.472947  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:10:16.473018  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:10:16.496859  203848 cri.go:89] found id: ""
	I1212 01:10:16.496883  203848 logs.go:282] 0 containers: []
	W1212 01:10:16.496892  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:10:16.496898  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:10:16.496984  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:10:16.523640  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:16.523661  203848 cri.go:89] found id: ""
	I1212 01:10:16.523669  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:10:16.523734  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:16.527545  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:10:16.527620  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:10:16.553389  203848 cri.go:89] found id: ""
	I1212 01:10:16.553417  203848 logs.go:282] 0 containers: []
	W1212 01:10:16.553426  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:10:16.553433  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:10:16.553493  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:10:16.578643  203848 cri.go:89] found id: ""
	I1212 01:10:16.578676  203848 logs.go:282] 0 containers: []
	W1212 01:10:16.578686  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:10:16.578702  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:10:16.578725  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:10:16.592299  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:10:16.592379  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:10:16.662772  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:10:16.662793  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:10:16.662805  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:16.700247  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:10:16.700279  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:16.729144  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:10:16.729174  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:10:16.759345  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:10:16.759379  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:10:16.796463  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:10:16.796493  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:10:16.855726  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:10:16.855759  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:16.899254  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:10:16.899284  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:19.432282  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:10:19.449449  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:10:19.449528  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:10:19.502758  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:19.502782  203848 cri.go:89] found id: ""
	I1212 01:10:19.502790  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:10:19.502851  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:19.511047  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:10:19.511145  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:10:19.558205  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:19.558236  203848 cri.go:89] found id: ""
	I1212 01:10:19.558244  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:10:19.558301  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:19.565499  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:10:19.565583  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:10:19.617626  203848 cri.go:89] found id: ""
	I1212 01:10:19.617659  203848 logs.go:282] 0 containers: []
	W1212 01:10:19.617668  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:10:19.617675  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:10:19.617741  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:10:19.654222  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:19.654240  203848 cri.go:89] found id: ""
	I1212 01:10:19.654300  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:10:19.654389  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:19.662486  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:10:19.662610  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:10:19.705519  203848 cri.go:89] found id: ""
	I1212 01:10:19.705592  203848 logs.go:282] 0 containers: []
	W1212 01:10:19.705615  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:10:19.705633  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:10:19.705737  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:10:19.744270  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:19.744338  203848 cri.go:89] found id: ""
	I1212 01:10:19.744361  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:10:19.744451  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:19.749551  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:10:19.749694  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:10:19.781973  203848 cri.go:89] found id: ""
	I1212 01:10:19.782045  203848 logs.go:282] 0 containers: []
	W1212 01:10:19.782068  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:10:19.782087  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:10:19.782174  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:10:19.809421  203848 cri.go:89] found id: ""
	I1212 01:10:19.809493  203848 logs.go:282] 0 containers: []
	W1212 01:10:19.809515  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:10:19.809541  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:10:19.809580  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:10:19.825173  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:10:19.825264  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:10:19.916107  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:10:19.916167  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:10:19.916205  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:19.957316  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:10:19.957449  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:10:19.992092  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:10:19.992163  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:10:20.078744  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:10:20.078786  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:20.163761  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:10:20.163794  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:20.215932  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:10:20.215965  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:20.259214  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:10:20.259250  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:10:22.792012  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:10:22.801942  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:10:22.802014  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:10:22.830232  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:22.830260  203848 cri.go:89] found id: ""
	I1212 01:10:22.830269  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:10:22.830326  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:22.833904  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:10:22.833977  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:10:22.864793  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:22.864814  203848 cri.go:89] found id: ""
	I1212 01:10:22.864822  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:10:22.864877  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:22.868653  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:10:22.868722  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:10:22.892053  203848 cri.go:89] found id: ""
	I1212 01:10:22.892081  203848 logs.go:282] 0 containers: []
	W1212 01:10:22.892089  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:10:22.892095  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:10:22.892154  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:10:22.917627  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:22.917653  203848 cri.go:89] found id: ""
	I1212 01:10:22.917662  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:10:22.917718  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:22.921379  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:10:22.921458  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:10:22.950108  203848 cri.go:89] found id: ""
	I1212 01:10:22.950129  203848 logs.go:282] 0 containers: []
	W1212 01:10:22.950138  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:10:22.950144  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:10:22.950204  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:10:22.975932  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:22.975953  203848 cri.go:89] found id: ""
	I1212 01:10:22.975961  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:10:22.976017  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:22.979799  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:10:22.979902  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:10:23.006656  203848 cri.go:89] found id: ""
	I1212 01:10:23.006687  203848 logs.go:282] 0 containers: []
	W1212 01:10:23.006696  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:10:23.006703  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:10:23.006787  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:10:23.043636  203848 cri.go:89] found id: ""
	I1212 01:10:23.043662  203848 logs.go:282] 0 containers: []
	W1212 01:10:23.043671  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:10:23.043685  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:10:23.043696  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:10:23.112900  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:10:23.112935  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:10:23.126129  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:10:23.126157  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:23.159888  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:10:23.159920  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:10:23.189906  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:10:23.189937  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:10:23.253921  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:10:23.253984  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:10:23.254004  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:23.288432  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:10:23.288463  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:23.317395  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:10:23.317427  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:23.348528  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:10:23.348563  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:10:25.881383  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:10:25.891875  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:10:25.891946  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:10:25.922462  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:25.922483  203848 cri.go:89] found id: ""
	I1212 01:10:25.922491  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:10:25.922558  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:25.926546  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:10:25.926619  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:10:25.952324  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:25.952346  203848 cri.go:89] found id: ""
	I1212 01:10:25.952354  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:10:25.952411  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:25.956316  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:10:25.956387  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:10:25.982976  203848 cri.go:89] found id: ""
	I1212 01:10:25.983012  203848 logs.go:282] 0 containers: []
	W1212 01:10:25.983022  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:10:25.983028  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:10:25.983095  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:10:26.012988  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:26.013010  203848 cri.go:89] found id: ""
	I1212 01:10:26.013019  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:10:26.013100  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:26.017750  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:10:26.017854  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:10:26.056179  203848 cri.go:89] found id: ""
	I1212 01:10:26.056206  203848 logs.go:282] 0 containers: []
	W1212 01:10:26.056215  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:10:26.056238  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:10:26.056304  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:10:26.094662  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:26.094686  203848 cri.go:89] found id: ""
	I1212 01:10:26.094695  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:10:26.094755  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:26.099190  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:10:26.099272  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:10:26.133219  203848 cri.go:89] found id: ""
	I1212 01:10:26.133285  203848 logs.go:282] 0 containers: []
	W1212 01:10:26.133300  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:10:26.133307  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:10:26.133372  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:10:26.159721  203848 cri.go:89] found id: ""
	I1212 01:10:26.159755  203848 logs.go:282] 0 containers: []
	W1212 01:10:26.159764  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:10:26.159809  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:10:26.159826  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:26.193163  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:10:26.193197  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:10:26.221893  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:10:26.221923  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:10:26.285148  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:10:26.285184  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:10:26.298722  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:10:26.298754  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:10:26.367403  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:10:26.367424  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:10:26.367437  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:26.399444  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:10:26.399475  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:10:26.430914  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:10:26.430942  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:26.464685  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:10:26.464715  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:28.996365  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:10:29.009177  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:10:29.009257  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:10:29.040397  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:29.040420  203848 cri.go:89] found id: ""
	I1212 01:10:29.040428  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:10:29.040485  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:29.044582  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:10:29.044653  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:10:29.073484  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:29.073507  203848 cri.go:89] found id: ""
	I1212 01:10:29.073515  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:10:29.073571  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:29.077670  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:10:29.077743  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:10:29.113792  203848 cri.go:89] found id: ""
	I1212 01:10:29.113816  203848 logs.go:282] 0 containers: []
	W1212 01:10:29.113861  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:10:29.113872  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:10:29.113935  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:10:29.138714  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:29.138737  203848 cri.go:89] found id: ""
	I1212 01:10:29.138745  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:10:29.138802  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:29.142804  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:10:29.142879  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:10:29.172977  203848 cri.go:89] found id: ""
	I1212 01:10:29.173003  203848 logs.go:282] 0 containers: []
	W1212 01:10:29.173012  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:10:29.173019  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:10:29.173097  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:10:29.198685  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:29.198705  203848 cri.go:89] found id: ""
	I1212 01:10:29.198720  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:10:29.198776  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:29.202529  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:10:29.202602  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:10:29.232608  203848 cri.go:89] found id: ""
	I1212 01:10:29.232636  203848 logs.go:282] 0 containers: []
	W1212 01:10:29.232651  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:10:29.232657  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:10:29.232742  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:10:29.261306  203848 cri.go:89] found id: ""
	I1212 01:10:29.261331  203848 logs.go:282] 0 containers: []
	W1212 01:10:29.261340  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:10:29.261354  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:10:29.261368  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:10:29.329484  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:10:29.329507  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:10:29.329520  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:29.373786  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:10:29.373830  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:29.408146  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:10:29.408225  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:29.446026  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:10:29.446055  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:29.477680  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:10:29.477706  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:10:29.507539  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:10:29.507574  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:10:29.536867  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:10:29.536897  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:10:29.598718  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:10:29.598753  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:10:32.119134  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:10:32.129577  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:10:32.129670  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:10:32.155664  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:32.155686  203848 cri.go:89] found id: ""
	I1212 01:10:32.155694  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:10:32.155752  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:32.159710  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:10:32.159785  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:10:32.186927  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:32.186949  203848 cri.go:89] found id: ""
	I1212 01:10:32.186957  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:10:32.187059  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:32.190958  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:10:32.191066  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:10:32.216100  203848 cri.go:89] found id: ""
	I1212 01:10:32.216127  203848 logs.go:282] 0 containers: []
	W1212 01:10:32.216136  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:10:32.216142  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:10:32.216205  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:10:32.242548  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:32.242572  203848 cri.go:89] found id: ""
	I1212 01:10:32.242581  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:10:32.242643  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:32.246463  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:10:32.246538  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:10:32.272170  203848 cri.go:89] found id: ""
	I1212 01:10:32.272199  203848 logs.go:282] 0 containers: []
	W1212 01:10:32.272209  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:10:32.272216  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:10:32.272284  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:10:32.296553  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:32.296575  203848 cri.go:89] found id: ""
	I1212 01:10:32.296583  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:10:32.296638  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:32.300343  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:10:32.300421  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:10:32.324606  203848 cri.go:89] found id: ""
	I1212 01:10:32.324665  203848 logs.go:282] 0 containers: []
	W1212 01:10:32.324674  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:10:32.324680  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:10:32.324737  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:10:32.348532  203848 cri.go:89] found id: ""
	I1212 01:10:32.348560  203848 logs.go:282] 0 containers: []
	W1212 01:10:32.348569  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:10:32.348583  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:10:32.348596  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:10:32.377427  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:10:32.377486  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:10:32.404996  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:10:32.405021  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:10:32.466962  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:10:32.467011  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:10:32.479855  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:10:32.479885  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:32.511573  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:10:32.511605  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:10:32.576313  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:10:32.576338  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:10:32.576351  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:32.610598  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:10:32.610629  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:32.639381  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:10:32.639415  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
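
The discovery phase at the top of each pass asks crictl for every expected control-plane container by name; kube-apiserver, etcd, kube-scheduler and kube-controller-manager are found on each pass, while coredns, kube-proxy, kindnet and storage-provisioner never appear. The same enumeration, unrolled as a sketch (names and flags from the log; `--quiet` prints bare container IDs, `--name` filters by container name):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done
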
	I1212 01:10:35.181544  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:10:35.192700  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:10:35.192773  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:10:35.219030  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:35.219058  203848 cri.go:89] found id: ""
	I1212 01:10:35.219067  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:10:35.219128  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:35.223109  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:10:35.223183  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:10:35.250146  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:35.250167  203848 cri.go:89] found id: ""
	I1212 01:10:35.250175  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:10:35.250241  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:35.254275  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:10:35.254360  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:10:35.285392  203848 cri.go:89] found id: ""
	I1212 01:10:35.285414  203848 logs.go:282] 0 containers: []
	W1212 01:10:35.285423  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:10:35.285433  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:10:35.285499  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:10:35.312190  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:35.312212  203848 cri.go:89] found id: ""
	I1212 01:10:35.312221  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:10:35.312294  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:35.316383  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:10:35.316463  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:10:35.342905  203848 cri.go:89] found id: ""
	I1212 01:10:35.342931  203848 logs.go:282] 0 containers: []
	W1212 01:10:35.342941  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:10:35.342947  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:10:35.343064  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:10:35.368862  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:35.368885  203848 cri.go:89] found id: ""
	I1212 01:10:35.368894  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:10:35.368954  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:35.372886  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:10:35.372967  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:10:35.397843  203848 cri.go:89] found id: ""
	I1212 01:10:35.397871  203848 logs.go:282] 0 containers: []
	W1212 01:10:35.397880  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:10:35.397886  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:10:35.397956  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:10:35.423604  203848 cri.go:89] found id: ""
	I1212 01:10:35.423630  203848 logs.go:282] 0 containers: []
	W1212 01:10:35.423645  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:10:35.423661  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:10:35.423697  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:10:35.436911  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:10:35.436937  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:10:35.499953  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:10:35.499986  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:10:35.500000  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:10:35.528799  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:10:35.528834  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:10:35.556883  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:10:35.556910  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:10:35.615042  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:10:35.615079  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:35.651781  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:10:35.651814  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:35.683827  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:10:35.683856  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:35.711281  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:10:35.711311  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
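
Every pass, spaced roughly three seconds apart, opens with the same process probe before any containers are listed. A sketch of that probe (pattern copied from the log; `-x` requires the whole command line to match, `-n` selects the newest matching process, `-f` matches against the full command line rather than just the process name):

    # Look for a running apiserver process whose command line mentions minikube.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # A printed PID means the process exists; here crictl keeps finding the
    # apiserver container even though localhost:8443 still refuses connections.
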
	I1212 01:10:38.249449  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:10:38.259382  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:10:38.259461  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:10:38.291884  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:38.291909  203848 cri.go:89] found id: ""
	I1212 01:10:38.291918  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:10:38.291975  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:38.295540  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:10:38.295610  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:10:38.320271  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:38.320293  203848 cri.go:89] found id: ""
	I1212 01:10:38.320301  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:10:38.320357  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:38.324001  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:10:38.324072  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:10:38.347402  203848 cri.go:89] found id: ""
	I1212 01:10:38.347427  203848 logs.go:282] 0 containers: []
	W1212 01:10:38.347436  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:10:38.347442  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:10:38.347499  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:10:38.372149  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:38.372168  203848 cri.go:89] found id: ""
	I1212 01:10:38.372176  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:10:38.372236  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:38.375904  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:10:38.375981  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:10:38.400509  203848 cri.go:89] found id: ""
	I1212 01:10:38.400533  203848 logs.go:282] 0 containers: []
	W1212 01:10:38.400542  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:10:38.400548  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:10:38.400606  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:10:38.424960  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:38.425029  203848 cri.go:89] found id: ""
	I1212 01:10:38.425045  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:10:38.425109  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:38.429155  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:10:38.429232  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:10:38.457422  203848 cri.go:89] found id: ""
	I1212 01:10:38.457495  203848 logs.go:282] 0 containers: []
	W1212 01:10:38.457517  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:10:38.457536  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:10:38.457626  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:10:38.482085  203848 cri.go:89] found id: ""
	I1212 01:10:38.482158  203848 logs.go:282] 0 containers: []
	W1212 01:10:38.482181  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:10:38.482209  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:10:38.482244  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:38.508521  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:10:38.508554  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:10:38.537061  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:10:38.537094  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:10:38.565265  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:10:38.565293  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:10:38.623541  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:10:38.623575  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:38.655032  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:10:38.655060  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:10:38.668107  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:10:38.668136  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:10:38.734040  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:10:38.734061  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:10:38.734074  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:38.768354  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:10:38.768388  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
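
The "container status" step is runtime-agnostic: it resolves crictl if available and falls back to docker when the crictl invocation fails. The backtick expression from the log, unrolled as a sketch:

    # Prefer crictl when it is on PATH; if `which crictl` finds nothing, the bare
    # name keeps the command well-formed and the docker fallback takes over.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
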
	I1212 01:10:41.316123  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:10:41.326360  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:10:41.326433  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:10:41.355896  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:41.355915  203848 cri.go:89] found id: ""
	I1212 01:10:41.355923  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:10:41.355980  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:41.360168  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:10:41.360239  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:10:41.385007  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:41.385029  203848 cri.go:89] found id: ""
	I1212 01:10:41.385037  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:10:41.385093  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:41.388963  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:10:41.389044  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:10:41.415600  203848 cri.go:89] found id: ""
	I1212 01:10:41.415625  203848 logs.go:282] 0 containers: []
	W1212 01:10:41.415634  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:10:41.415640  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:10:41.415702  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:10:41.440921  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:41.440944  203848 cri.go:89] found id: ""
	I1212 01:10:41.440952  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:10:41.441009  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:41.444710  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:10:41.444789  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:10:41.478190  203848 cri.go:89] found id: ""
	I1212 01:10:41.478216  203848 logs.go:282] 0 containers: []
	W1212 01:10:41.478225  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:10:41.478232  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:10:41.478296  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:10:41.504344  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:41.504369  203848 cri.go:89] found id: ""
	I1212 01:10:41.504377  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:10:41.504437  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:41.508313  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:10:41.508421  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:10:41.532591  203848 cri.go:89] found id: ""
	I1212 01:10:41.532619  203848 logs.go:282] 0 containers: []
	W1212 01:10:41.532634  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:10:41.532641  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:10:41.532699  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:10:41.559633  203848 cri.go:89] found id: ""
	I1212 01:10:41.559662  203848 logs.go:282] 0 containers: []
	W1212 01:10:41.559672  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:10:41.559687  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:10:41.559703  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:10:41.622216  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:10:41.622235  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:10:41.622248  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:41.664851  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:10:41.664882  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:10:41.727701  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:10:41.727738  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:10:41.743449  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:10:41.743487  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:41.790213  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:10:41.790248  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:41.860421  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:10:41.860451  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:41.923143  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:10:41.923176  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:10:41.954805  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:10:41.954841  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
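
With the container IDs discovered above, each pass then tails every found control-plane container individually. A sketch for the kube-apiserver container (ID and binary path copied from the log; `--tail 400` limits output to the container's last 400 log lines):

    sudo /usr/local/bin/crictl logs --tail 400 \
      8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3
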
	I1212 01:10:44.501144  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:10:44.511387  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:10:44.511464  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:10:44.536511  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:44.536532  203848 cri.go:89] found id: ""
	I1212 01:10:44.536541  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:10:44.536597  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:44.540392  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:10:44.540468  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:10:44.567269  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:44.567293  203848 cri.go:89] found id: ""
	I1212 01:10:44.567303  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:10:44.567359  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:44.571281  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:10:44.571355  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:10:44.600948  203848 cri.go:89] found id: ""
	I1212 01:10:44.600972  203848 logs.go:282] 0 containers: []
	W1212 01:10:44.600980  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:10:44.600987  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:10:44.601048  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:10:44.626257  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:44.626286  203848 cri.go:89] found id: ""
	I1212 01:10:44.626295  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:10:44.626371  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:44.630272  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:10:44.630347  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:10:44.656180  203848 cri.go:89] found id: ""
	I1212 01:10:44.656203  203848 logs.go:282] 0 containers: []
	W1212 01:10:44.656212  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:10:44.656218  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:10:44.656283  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:10:44.683586  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:44.683614  203848 cri.go:89] found id: ""
	I1212 01:10:44.683623  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:10:44.683691  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:44.687464  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:10:44.687538  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:10:44.714242  203848 cri.go:89] found id: ""
	I1212 01:10:44.714267  203848 logs.go:282] 0 containers: []
	W1212 01:10:44.714277  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:10:44.714284  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:10:44.714345  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:10:44.741832  203848 cri.go:89] found id: ""
	I1212 01:10:44.741852  203848 logs.go:282] 0 containers: []
	W1212 01:10:44.741860  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:10:44.741871  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:10:44.741889  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:44.803688  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:10:44.803756  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:44.868424  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:10:44.868501  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:10:44.939942  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:10:44.939970  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:10:44.963993  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:10:44.964018  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:45.008610  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:10:45.008702  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:45.052900  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:10:45.052984  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:10:45.097433  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:10:45.097543  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:10:45.145554  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:10:45.145598  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:10:45.274898  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:10:47.775100  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:10:47.785426  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:10:47.785503  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:10:47.813052  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:47.813078  203848 cri.go:89] found id: ""
	I1212 01:10:47.813086  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:10:47.813145  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:47.817358  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:10:47.817429  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:10:47.850029  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:47.850051  203848 cri.go:89] found id: ""
	I1212 01:10:47.850060  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:10:47.850119  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:47.854044  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:10:47.854114  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:10:47.883877  203848 cri.go:89] found id: ""
	I1212 01:10:47.883899  203848 logs.go:282] 0 containers: []
	W1212 01:10:47.883908  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:10:47.883914  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:10:47.883971  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:10:47.910093  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:47.910116  203848 cri.go:89] found id: ""
	I1212 01:10:47.910124  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:10:47.910180  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:47.913839  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:10:47.913910  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:10:47.941312  203848 cri.go:89] found id: ""
	I1212 01:10:47.941337  203848 logs.go:282] 0 containers: []
	W1212 01:10:47.941346  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:10:47.941351  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:10:47.941410  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:10:47.965726  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:47.965747  203848 cri.go:89] found id: ""
	I1212 01:10:47.965755  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:10:47.965816  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:47.969541  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:10:47.969648  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:10:47.996430  203848 cri.go:89] found id: ""
	I1212 01:10:47.996453  203848 logs.go:282] 0 containers: []
	W1212 01:10:47.996461  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:10:47.996467  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:10:47.996554  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:10:48.026681  203848 cri.go:89] found id: ""
	I1212 01:10:48.026746  203848 logs.go:282] 0 containers: []
	W1212 01:10:48.026771  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:10:48.026801  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:10:48.026835  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:10:48.085030  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:10:48.085067  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:10:48.151902  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:10:48.151926  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:10:48.151939  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:48.184107  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:10:48.184138  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:48.215253  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:10:48.215281  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:48.248479  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:10:48.248510  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:10:48.277512  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:10:48.277550  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:10:48.317249  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:10:48.317286  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:10:48.330397  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:10:48.330423  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:50.878621  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:10:50.888461  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:10:50.888533  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:10:50.912361  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:50.912385  203848 cri.go:89] found id: ""
	I1212 01:10:50.912393  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:10:50.912457  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:50.916130  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:10:50.916207  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:10:50.941166  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:50.941188  203848 cri.go:89] found id: ""
	I1212 01:10:50.941195  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:10:50.941252  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:50.944818  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:10:50.944897  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:10:50.969321  203848 cri.go:89] found id: ""
	I1212 01:10:50.969343  203848 logs.go:282] 0 containers: []
	W1212 01:10:50.969352  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:10:50.969358  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:10:50.969422  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:10:50.999271  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:50.999293  203848 cri.go:89] found id: ""
	I1212 01:10:50.999301  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:10:50.999365  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:51.003879  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:10:51.003971  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:10:51.033002  203848 cri.go:89] found id: ""
	I1212 01:10:51.033027  203848 logs.go:282] 0 containers: []
	W1212 01:10:51.033037  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:10:51.033043  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:10:51.033103  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:10:51.061291  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:51.061314  203848 cri.go:89] found id: ""
	I1212 01:10:51.061323  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:10:51.061385  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:51.065246  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:10:51.065319  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:10:51.093325  203848 cri.go:89] found id: ""
	I1212 01:10:51.093348  203848 logs.go:282] 0 containers: []
	W1212 01:10:51.093362  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:10:51.093369  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:10:51.093432  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:10:51.118226  203848 cri.go:89] found id: ""
	I1212 01:10:51.118249  203848 logs.go:282] 0 containers: []
	W1212 01:10:51.118258  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:10:51.118273  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:10:51.118285  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:10:51.176951  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:10:51.176988  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:10:51.189795  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:10:51.189823  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:10:51.251587  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:10:51.251608  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:10:51.251619  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:10:51.280059  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:10:51.280091  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:10:51.319920  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:10:51.319947  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:51.353906  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:10:51.353941  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:51.390712  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:10:51.390743  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:51.419302  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:10:51.419333  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:53.950925  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:10:53.961174  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:10:53.961242  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:10:53.988742  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:53.988762  203848 cri.go:89] found id: ""
	I1212 01:10:53.988770  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:10:53.988838  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:53.992595  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:10:53.992668  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:10:54.021682  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:54.021708  203848 cri.go:89] found id: ""
	I1212 01:10:54.021718  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:10:54.021780  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:54.025821  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:10:54.025899  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:10:54.052194  203848 cri.go:89] found id: ""
	I1212 01:10:54.052220  203848 logs.go:282] 0 containers: []
	W1212 01:10:54.052229  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:10:54.052235  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:10:54.052301  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:10:54.076098  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:54.076120  203848 cri.go:89] found id: ""
	I1212 01:10:54.076128  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:10:54.076184  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:54.080124  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:10:54.080223  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:10:54.104565  203848 cri.go:89] found id: ""
	I1212 01:10:54.104590  203848 logs.go:282] 0 containers: []
	W1212 01:10:54.104600  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:10:54.104607  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:10:54.104669  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:10:54.129233  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:54.129256  203848 cri.go:89] found id: ""
	I1212 01:10:54.129265  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:10:54.129325  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:54.133048  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:10:54.133145  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:10:54.156487  203848 cri.go:89] found id: ""
	I1212 01:10:54.156510  203848 logs.go:282] 0 containers: []
	W1212 01:10:54.156519  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:10:54.156525  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:10:54.156585  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:10:54.179963  203848 cri.go:89] found id: ""
	I1212 01:10:54.179990  203848 logs.go:282] 0 containers: []
	W1212 01:10:54.179998  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:10:54.180011  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:10:54.180023  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:10:54.236463  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:10:54.236498  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:10:54.301999  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:10:54.302064  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:10:54.302105  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:54.343037  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:10:54.343072  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:10:54.372280  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:10:54.372309  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:10:54.385023  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:10:54.385058  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:54.423526  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:10:54.423559  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:54.460160  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:10:54.460193  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:54.492674  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:10:54.492703  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
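
The cycle above repeats on a roughly three-second interval for the remainder of this phase: minikube probes for a running apiserver with pgrep, enumerates each expected control-plane container through crictl (finding only apiserver, etcd, scheduler, and controller-manager — no coredns, kube-proxy, kindnet, or storage-provisioner), and gathers kubelet, dmesg, containerd, and per-container logs. The repeated "connection to the server localhost:8443 was refused" means the apiserver container exists but is not serving. The probe can be reproduced by hand inside the node; this is a minimal sketch using only commands the log itself runs, with the container ID taken from the output above:

	# check whether an apiserver process is up at all
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# list all apiserver containers known to containerd, running or not
	sudo crictl ps -a --quiet --name=kube-apiserver
	# tail the logs of the container found above
	sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3
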
	I1212 01:10:57.020999  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:10:57.031717  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:10:57.031795  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:10:57.058554  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:57.058579  203848 cri.go:89] found id: ""
	I1212 01:10:57.058588  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:10:57.058645  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:57.062434  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:10:57.062513  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:10:57.091630  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:10:57.091652  203848 cri.go:89] found id: ""
	I1212 01:10:57.091660  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:10:57.091732  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:57.095447  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:10:57.095549  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:10:57.119594  203848 cri.go:89] found id: ""
	I1212 01:10:57.119617  203848 logs.go:282] 0 containers: []
	W1212 01:10:57.119626  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:10:57.119632  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:10:57.119691  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:10:57.144325  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:57.144344  203848 cri.go:89] found id: ""
	I1212 01:10:57.144353  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:10:57.144411  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:57.148302  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:10:57.148378  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:10:57.172685  203848 cri.go:89] found id: ""
	I1212 01:10:57.172707  203848 logs.go:282] 0 containers: []
	W1212 01:10:57.172715  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:10:57.172721  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:10:57.172784  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:10:57.208354  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:57.208418  203848 cri.go:89] found id: ""
	I1212 01:10:57.208439  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:10:57.208520  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:10:57.212336  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:10:57.212409  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:10:57.237271  203848 cri.go:89] found id: ""
	I1212 01:10:57.237345  203848 logs.go:282] 0 containers: []
	W1212 01:10:57.237370  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:10:57.237388  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:10:57.237478  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:10:57.262612  203848 cri.go:89] found id: ""
	I1212 01:10:57.262632  203848 logs.go:282] 0 containers: []
	W1212 01:10:57.262640  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:10:57.262655  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:10:57.262666  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:10:57.275507  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:10:57.275538  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:10:57.303263  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:10:57.303293  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:10:57.334157  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:10:57.334186  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:10:57.363485  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:10:57.363521  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:10:57.394391  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:10:57.394418  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:10:57.452210  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:10:57.452245  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:10:57.521617  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:10:57.521635  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:10:57.521652  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:10:57.572506  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:10:57.572544  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:11:00.116699  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:11:00.181554  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:11:00.181637  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:11:00.243818  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:11:00.243855  203848 cri.go:89] found id: ""
	I1212 01:11:00.243894  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:11:00.243965  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:00.251304  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:11:00.251435  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:11:00.304357  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:11:00.304389  203848 cri.go:89] found id: ""
	I1212 01:11:00.304399  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:11:00.304468  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:00.310193  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:11:00.310320  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:11:00.372696  203848 cri.go:89] found id: ""
	I1212 01:11:00.372776  203848 logs.go:282] 0 containers: []
	W1212 01:11:00.372800  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:11:00.372822  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:11:00.372933  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:11:00.402217  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:11:00.402291  203848 cri.go:89] found id: ""
	I1212 01:11:00.402315  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:11:00.402407  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:00.406828  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:11:00.406962  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:11:00.438521  203848 cri.go:89] found id: ""
	I1212 01:11:00.438600  203848 logs.go:282] 0 containers: []
	W1212 01:11:00.438625  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:11:00.438646  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:11:00.438729  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:11:00.466232  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:11:00.466294  203848 cri.go:89] found id: ""
	I1212 01:11:00.466315  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:11:00.466399  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:00.470569  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:11:00.470708  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:11:00.496858  203848 cri.go:89] found id: ""
	I1212 01:11:00.496924  203848 logs.go:282] 0 containers: []
	W1212 01:11:00.496940  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:11:00.496947  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:11:00.497020  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:11:00.522351  203848 cri.go:89] found id: ""
	I1212 01:11:00.522376  203848 logs.go:282] 0 containers: []
	W1212 01:11:00.522385  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:11:00.522401  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:11:00.522413  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:11:00.585338  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:11:00.585378  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:11:00.599075  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:11:00.599149  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:11:00.669021  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:11:00.669042  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:11:00.669056  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:11:00.705857  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:11:00.705888  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:11:00.738015  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:11:00.738047  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:11:00.767505  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:11:00.767535  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:11:00.802973  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:11:00.803030  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:11:00.831764  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:11:00.831798  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:11:03.372776  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:11:03.382886  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:11:03.382958  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:11:03.411524  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:11:03.411546  203848 cri.go:89] found id: ""
	I1212 01:11:03.411555  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:11:03.411611  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:03.415254  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:11:03.415323  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:11:03.440944  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:11:03.440966  203848 cri.go:89] found id: ""
	I1212 01:11:03.440974  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:11:03.441029  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:03.444737  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:11:03.444811  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:11:03.468811  203848 cri.go:89] found id: ""
	I1212 01:11:03.468888  203848 logs.go:282] 0 containers: []
	W1212 01:11:03.468903  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:11:03.468917  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:11:03.468989  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:11:03.494187  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:11:03.494208  203848 cri.go:89] found id: ""
	I1212 01:11:03.494216  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:11:03.494271  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:03.497970  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:11:03.498047  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:11:03.524939  203848 cri.go:89] found id: ""
	I1212 01:11:03.524962  203848 logs.go:282] 0 containers: []
	W1212 01:11:03.524972  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:11:03.524978  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:11:03.525066  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:11:03.556850  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:11:03.556872  203848 cri.go:89] found id: ""
	I1212 01:11:03.556881  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:11:03.556956  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:03.561071  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:11:03.561194  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:11:03.587115  203848 cri.go:89] found id: ""
	I1212 01:11:03.587183  203848 logs.go:282] 0 containers: []
	W1212 01:11:03.587208  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:11:03.587229  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:11:03.587318  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:11:03.614779  203848 cri.go:89] found id: ""
	I1212 01:11:03.614852  203848 logs.go:282] 0 containers: []
	W1212 01:11:03.614874  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:11:03.614903  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:11:03.614941  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:11:03.658281  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:11:03.658306  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:11:03.697456  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:11:03.697488  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:11:03.725994  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:11:03.726022  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:11:03.788300  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:11:03.788339  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:11:03.802031  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:11:03.802061  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:11:03.838079  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:11:03.838113  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:11:03.870237  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:11:03.870273  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:11:03.939673  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:11:03.939691  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:11:03.939703  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:11:06.473009  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:11:06.484180  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:11:06.484255  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:11:06.526165  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:11:06.526188  203848 cri.go:89] found id: ""
	I1212 01:11:06.526197  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:11:06.526255  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:06.530496  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:11:06.530573  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:11:06.589305  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:11:06.589328  203848 cri.go:89] found id: ""
	I1212 01:11:06.589337  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:11:06.589399  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:06.595466  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:11:06.595540  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:11:06.644041  203848 cri.go:89] found id: ""
	I1212 01:11:06.644066  203848 logs.go:282] 0 containers: []
	W1212 01:11:06.644075  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:11:06.644081  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:11:06.644144  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:11:06.669562  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:11:06.669588  203848 cri.go:89] found id: ""
	I1212 01:11:06.669597  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:11:06.669655  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:06.673271  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:11:06.673347  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:11:06.698773  203848 cri.go:89] found id: ""
	I1212 01:11:06.698800  203848 logs.go:282] 0 containers: []
	W1212 01:11:06.698810  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:11:06.698816  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:11:06.698874  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:11:06.725053  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:11:06.725087  203848 cri.go:89] found id: ""
	I1212 01:11:06.725096  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:11:06.725153  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:06.728981  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:11:06.729053  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:11:06.755820  203848 cri.go:89] found id: ""
	I1212 01:11:06.755843  203848 logs.go:282] 0 containers: []
	W1212 01:11:06.755851  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:11:06.755857  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:11:06.755923  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:11:06.785781  203848 cri.go:89] found id: ""
	I1212 01:11:06.785814  203848 logs.go:282] 0 containers: []
	W1212 01:11:06.785825  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:11:06.785842  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:11:06.785859  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:11:06.847613  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:11:06.847647  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:11:06.915260  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:11:06.915320  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:11:06.915349  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:11:06.959615  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:11:06.959649  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:11:06.995794  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:11:06.995829  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:11:07.026295  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:11:07.026329  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:11:07.056352  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:11:07.056382  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:11:07.070170  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:11:07.070257  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:11:07.098061  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:11:07.098089  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:11:09.633184  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:11:09.644854  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:11:09.644923  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:11:09.676046  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:11:09.676065  203848 cri.go:89] found id: ""
	I1212 01:11:09.676073  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:11:09.676132  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:09.679912  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:11:09.679982  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:11:09.722488  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:11:09.722507  203848 cri.go:89] found id: ""
	I1212 01:11:09.722515  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:11:09.722593  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:09.726453  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:11:09.726568  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:11:09.764178  203848 cri.go:89] found id: ""
	I1212 01:11:09.764256  203848 logs.go:282] 0 containers: []
	W1212 01:11:09.764284  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:11:09.764327  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:11:09.764446  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:11:09.801043  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:11:09.801070  203848 cri.go:89] found id: ""
	I1212 01:11:09.801078  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:11:09.801141  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:09.811721  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:11:09.811815  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:11:09.841904  203848 cri.go:89] found id: ""
	I1212 01:11:09.841935  203848 logs.go:282] 0 containers: []
	W1212 01:11:09.841946  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:11:09.841953  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:11:09.842020  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:11:09.876931  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:11:09.876950  203848 cri.go:89] found id: ""
	I1212 01:11:09.876958  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:11:09.877025  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:09.881709  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:11:09.881804  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:11:09.918851  203848 cri.go:89] found id: ""
	I1212 01:11:09.918874  203848 logs.go:282] 0 containers: []
	W1212 01:11:09.918886  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:11:09.918893  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:11:09.918957  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:11:09.947017  203848 cri.go:89] found id: ""
	I1212 01:11:09.947042  203848 logs.go:282] 0 containers: []
	W1212 01:11:09.947051  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:11:09.947073  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:11:09.947085  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:11:09.960254  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:11:09.960278  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:11:10.055579  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:11:10.055597  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:11:10.055625  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:11:10.084913  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:11:10.084981  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:11:10.134349  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:11:10.134382  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:11:10.204672  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:11:10.204791  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:11:10.247506  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:11:10.247575  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:11:10.283845  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:11:10.283918  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:11:10.318511  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:11:10.318604  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:11:12.851746  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:11:12.864190  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:11:12.864255  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:11:12.899957  203848 cri.go:89] found id: "8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:11:12.899979  203848 cri.go:89] found id: ""
	I1212 01:11:12.899986  203848 logs.go:282] 1 containers: [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3]
	I1212 01:11:12.900044  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:12.904199  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:11:12.904272  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:11:12.938556  203848 cri.go:89] found id: "b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:11:12.938576  203848 cri.go:89] found id: ""
	I1212 01:11:12.938585  203848 logs.go:282] 1 containers: [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74]
	I1212 01:11:12.938657  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:12.942894  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:11:12.942964  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:11:12.975362  203848 cri.go:89] found id: ""
	I1212 01:11:12.975380  203848 logs.go:282] 0 containers: []
	W1212 01:11:12.975387  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:11:12.975393  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:11:12.975443  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:11:13.011601  203848 cri.go:89] found id: "926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:11:13.011622  203848 cri.go:89] found id: ""
	I1212 01:11:13.011630  203848 logs.go:282] 1 containers: [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1]
	I1212 01:11:13.011690  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:13.015833  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:11:13.015905  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:11:13.045813  203848 cri.go:89] found id: ""
	I1212 01:11:13.045835  203848 logs.go:282] 0 containers: []
	W1212 01:11:13.045849  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:11:13.045856  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:11:13.045914  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:11:13.079164  203848 cri.go:89] found id: "3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:11:13.079202  203848 cri.go:89] found id: ""
	I1212 01:11:13.079210  203848 logs.go:282] 1 containers: [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3]
	I1212 01:11:13.079279  203848 ssh_runner.go:195] Run: which crictl
	I1212 01:11:13.084801  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:11:13.084945  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:11:13.116542  203848 cri.go:89] found id: ""
	I1212 01:11:13.116611  203848 logs.go:282] 0 containers: []
	W1212 01:11:13.116636  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:11:13.116656  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:11:13.116740  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:11:13.156762  203848 cri.go:89] found id: ""
	I1212 01:11:13.156835  203848 logs.go:282] 0 containers: []
	W1212 01:11:13.156871  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:11:13.156902  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:11:13.156929  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:11:13.240660  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:11:13.240751  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:11:13.254488  203848 logs.go:123] Gathering logs for kube-apiserver [8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3] ...
	I1212 01:11:13.254515  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3"
	I1212 01:11:13.305430  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:11:13.305509  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:11:13.358001  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:11:13.358037  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:11:13.400022  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:11:13.400047  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:11:13.507476  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:11:13.507496  203848 logs.go:123] Gathering logs for etcd [b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74] ...
	I1212 01:11:13.507508  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74"
	I1212 01:11:13.556164  203848 logs.go:123] Gathering logs for kube-scheduler [926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1] ...
	I1212 01:11:13.556238  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1"
	I1212 01:11:13.597933  203848 logs.go:123] Gathering logs for kube-controller-manager [3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3] ...
	I1212 01:11:13.598144  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3"
	I1212 01:11:16.165909  203848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:11:16.175672  203848 kubeadm.go:602] duration metric: took 4m3.184092771s to restartPrimaryControlPlane
	W1212 01:11:16.175734  203848 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:11:16.175801  203848 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
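
After 4m3.18s of polling without the apiserver ever answering on 8443, minikube gives up on restarting the existing control plane and falls back to wiping the cluster state and bootstrapping from scratch. The teardown is the standard kubeadm reset; this is the same command the log runs, restated without the bash -c wrapper (only the PATH prefix is minikube-specific):

	# tear down the failed control plane before re-running kubeadm init
	sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
	  kubeadm reset --cri-socket /run/containerd/containerd.sock --force
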
	I1212 01:11:16.694282  203848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:11:16.709951  203848 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:11:16.723073  203848 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:11:16.723142  203848 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:11:16.733962  203848 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:11:16.733984  203848 kubeadm.go:158] found existing configuration files:
	
	I1212 01:11:16.734036  203848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:11:16.742830  203848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:11:16.742945  203848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:11:16.751231  203848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:11:16.759695  203848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:11:16.759762  203848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:11:16.767572  203848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:11:16.775541  203848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:11:16.775650  203848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:11:16.784028  203848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:11:16.793266  203848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:11:16.793378  203848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
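
The reset removed everything under /etc/kubernetes, so the stale-config check that follows finds none of the four kubeconfig files: each grep for the control-plane endpoint exits with status 2 ("No such file or directory") and the subsequent rm is a no-op. The cleanup logic reduces to a simple shell pattern; a sketch of the equivalent loop, assuming the same four files minikube checks:

	# remove any kubeconfig that does not point at the expected control plane
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done
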
	I1212 01:11:16.801043  203848 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:11:16.854381  203848 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:11:16.855056  203848 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:11:16.980530  203848 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:11:16.980684  203848 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:11:16.980752  203848 kubeadm.go:319] OS: Linux
	I1212 01:11:16.980826  203848 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:11:16.980915  203848 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:11:16.980995  203848 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:11:16.981078  203848 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:11:16.981161  203848 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:11:16.981243  203848 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:11:16.981324  203848 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:11:16.981406  203848 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:11:16.981508  203848 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:11:17.070774  203848 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:11:17.070928  203848 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:11:17.071056  203848 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
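
kubeadm pauses here (about ten seconds in this run) to pull the control-plane images before generating certificates. The log's own hint applies; as a sketch, the pull can be done ahead of time against the same config file, assuming the usual --config flag of the images-pull subcommand:

	# pre-pull control-plane images so init does not block on the network
	sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
	  kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml
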
	I1212 01:11:27.119370  203848 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:11:27.122359  203848 out.go:252]   - Generating certificates and keys ...
	I1212 01:11:27.122455  203848 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:11:27.122523  203848 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:11:27.122600  203848 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:11:27.122662  203848 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:11:27.122733  203848 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:11:27.122788  203848 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 01:11:27.122855  203848 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:11:27.123120  203848 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:11:27.123338  203848 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:11:27.123557  203848 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:11:27.123757  203848 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 01:11:27.123818  203848 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:11:27.319764  203848 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:11:27.543636  203848 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:11:27.833755  203848 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:11:28.054613  203848 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:11:28.313069  203848 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:11:28.313535  203848 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:11:28.316299  203848 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:11:28.319457  203848 out.go:252]   - Booting up control plane ...
	I1212 01:11:28.319554  203848 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:11:28.319633  203848 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:11:28.319700  203848 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:11:28.347324  203848 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:11:28.347437  203848 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:11:28.356145  203848 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:11:28.356238  203848 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:11:28.356280  203848 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:11:28.493541  203848 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:11:28.493675  203848 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:15:28.494607  203848 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001315222s
	I1212 01:15:28.494640  203848 kubeadm.go:319] 
	I1212 01:15:28.494698  203848 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:15:28.494731  203848 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:15:28.494836  203848 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:15:28.494842  203848 kubeadm.go:319] 
	I1212 01:15:28.494946  203848 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:15:28.494978  203848 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:15:28.495026  203848 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:15:28.495031  203848 kubeadm.go:319] 
	I1212 01:15:28.499060  203848 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:15:28.499478  203848 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:15:28.499582  203848 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:15:28.499841  203848 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 01:15:28.499848  203848 kubeadm.go:319] 
	I1212 01:15:28.499914  203848 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
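The [kubelet-check] failure above is kubeadm polling the kubelet's local healthz endpoint for up to 4m0s; "connection refused" on 127.0.0.1:10248 means nothing is listening at all, i.e. the kubelet process exited or never started, not that it started and reported unhealthy. A minimal manual probe of the same endpoint, assuming a shell inside the node (e.g. via minikube ssh -p kubernetes-upgrade-439215); curl, systemctl and journalctl are the commands the error text itself names, the tail is a convenience:

	# Same endpoint kubeadm's [kubelet-check] polls (10248 is the kubelet's
	# default healthz port); a refused connection means kubelet is not running.
	curl -sSL http://127.0.0.1:10248/healthz; echo
	# Unit state and the most recent kubelet log lines:
	systemctl status kubelet
	journalctl -xeu kubelet --no-pager | tail -n 50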
	W1212 01:15:28.500019  203848 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001315222s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 01:15:28.500097  203848 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 01:15:28.928605  203848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:15:28.948092  203848 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:15:28.948158  203848 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:15:28.961923  203848 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:15:28.961940  203848 kubeadm.go:158] found existing configuration files:
	
	I1212 01:15:28.962002  203848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:15:28.971464  203848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:15:28.971529  203848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:15:28.979863  203848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:15:28.990798  203848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:15:28.990864  203848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:15:29.000424  203848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:15:29.021596  203848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:15:29.021660  203848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:15:29.030773  203848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:15:29.047258  203848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:15:29.047319  203848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
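The four grep/rm pairs above are minikube's stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint. Every grep here exits with status 2 because the preceding kubeadm reset removed the files, so the rm calls are no-ops. Condensed into one illustrative loop (endpoint and paths exactly as logged; the loop is a sketch, not minikube's actual code):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" \
	    "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	done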
	I1212 01:15:29.058309  203848 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:15:29.131399  203848 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:15:29.131560  203848 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:15:29.224218  203848 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:15:29.224291  203848 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:15:29.224326  203848 kubeadm.go:319] OS: Linux
	I1212 01:15:29.224368  203848 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:15:29.224413  203848 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:15:29.224458  203848 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:15:29.224503  203848 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:15:29.224548  203848 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:15:29.224626  203848 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:15:29.224700  203848 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:15:29.224758  203848 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:15:29.224803  203848 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:15:29.325906  203848 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:15:29.326016  203848 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:15:29.326101  203848 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:15:29.341431  203848 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:15:29.346549  203848 out.go:252]   - Generating certificates and keys ...
	I1212 01:15:29.346646  203848 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:15:29.346719  203848 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:15:29.346801  203848 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:15:29.346866  203848 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:15:29.346941  203848 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:15:29.347008  203848 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 01:15:29.347075  203848 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:15:29.347136  203848 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:15:29.347210  203848 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:15:29.347282  203848 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:15:29.347320  203848 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 01:15:29.347375  203848 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:15:29.559180  203848 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:15:29.622657  203848 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:15:30.464675  203848 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:15:30.954241  203848 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:15:31.527707  203848 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:15:31.527827  203848 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:15:31.527910  203848 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:15:31.537207  203848 out.go:252]   - Booting up control plane ...
	I1212 01:15:31.537316  203848 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:15:31.537396  203848 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:15:31.537470  203848 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:15:31.573989  203848 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:15:31.574096  203848 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:15:31.583928  203848 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:15:31.584036  203848 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:15:31.584080  203848 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:15:31.738381  203848 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:15:31.738504  203848 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:19:31.736902  203848 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000799957s
	I1212 01:19:31.736934  203848 kubeadm.go:319] 
	I1212 01:19:31.736992  203848 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:19:31.737026  203848 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:19:31.737130  203848 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:19:31.737136  203848 kubeadm.go:319] 
	I1212 01:19:31.737241  203848 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:19:31.737274  203848 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:19:31.737304  203848 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:19:31.737309  203848 kubeadm.go:319] 
	I1212 01:19:31.740266  203848 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:19:31.740698  203848 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:19:31.740807  203848 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:19:31.741069  203848 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 01:19:31.741075  203848 kubeadm.go:319] 
	I1212 01:19:31.741143  203848 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
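Note the cgroups v1 warning repeated by both attempts: kubelet v1.35 or newer refuses to run on a cgroup v1 host unless the configuration explicitly opts back in via FailCgroupV1, and the system verification above shows this 5.15.0-1084-aws node exposing v1 controllers. That gate is a plausible reason the kubelet exits immediately. A sketch of the opt-in, assuming the generated /var/lib/kubelet/config.yaml does not already set the field (field name and value come from the warning text itself):

	# Sketch only: append the documented opt-in to the kubelet configuration,
	# then restart the unit; assumes failCgroupV1 is not already present.
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet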
	I1212 01:19:31.741197  203848 kubeadm.go:403] duration metric: took 12m18.808665943s to StartCluster
	I1212 01:19:31.741229  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:19:31.741295  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:19:31.776138  203848 cri.go:89] found id: ""
	I1212 01:19:31.776160  203848 logs.go:282] 0 containers: []
	W1212 01:19:31.776168  203848 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:19:31.776174  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:19:31.776234  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:19:31.826147  203848 cri.go:89] found id: ""
	I1212 01:19:31.826169  203848 logs.go:282] 0 containers: []
	W1212 01:19:31.826177  203848 logs.go:284] No container was found matching "etcd"
	I1212 01:19:31.826183  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:19:31.826246  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:19:31.878291  203848 cri.go:89] found id: ""
	I1212 01:19:31.878313  203848 logs.go:282] 0 containers: []
	W1212 01:19:31.878322  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:19:31.878329  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:19:31.878390  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:19:31.910681  203848 cri.go:89] found id: ""
	I1212 01:19:31.910703  203848 logs.go:282] 0 containers: []
	W1212 01:19:31.910711  203848 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:19:31.910717  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:19:31.910772  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:19:31.945031  203848 cri.go:89] found id: ""
	I1212 01:19:31.945055  203848 logs.go:282] 0 containers: []
	W1212 01:19:31.945065  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:19:31.945071  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:19:31.945129  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:19:31.981346  203848 cri.go:89] found id: ""
	I1212 01:19:31.981412  203848 logs.go:282] 0 containers: []
	W1212 01:19:31.981435  203848 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:19:31.981455  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:19:31.981541  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:19:32.017546  203848 cri.go:89] found id: ""
	I1212 01:19:32.017570  203848 logs.go:282] 0 containers: []
	W1212 01:19:32.017579  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:19:32.017604  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:19:32.017683  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:19:32.065393  203848 cri.go:89] found id: ""
	I1212 01:19:32.065469  203848 logs.go:282] 0 containers: []
	W1212 01:19:32.065492  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:19:32.065516  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:19:32.065556  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:19:32.145353  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:19:32.145418  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:19:32.209552  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:19:32.209584  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:19:32.225719  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:19:32.225743  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:19:32.309077  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:19:32.309147  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:19:32.309174  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
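The container sweep above comes back empty for every control-plane name (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, storage-provisioner): the kubelet never instantiated a single static pod, which matches the healthz port never opening. The same spot check by hand, using the exact crictl invocation from the log:

	# Any-state containers named kube-apiserver; empty output means the
	# static pod manifest was never acted on:
	sudo crictl ps -a --quiet --name=kube-apiserver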
	W1212 01:19:32.356920  203848 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000799957s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:19:32.357024  203848 out.go:285] * 
	W1212 01:19:32.357223  203848 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000799957s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:19:32.357279  203848 out.go:285] * 
	W1212 01:19:32.359555  203848 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
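If a retry with the suggested kubelet flag (see the sketch after the test summary below) still fails, the box above gives the escalation path; capturing the full log for the GitHub issue, using the command from the box plus the profile flag this test uses:

	out/minikube-linux-arm64 logs --file=logs.txt -p kubernetes-upgrade-439215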
	I1212 01:19:32.367165  203848 out.go:203] 
	W1212 01:19:32.371117  203848 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000799957s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:19:32.371345  203848 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 01:19:32.371406  203848 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 01:19:32.374965  203848 out.go:203] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-439215 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
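The failing invocation above plus the suggestion minikube printed give a direct next step. A hedged retry, identical to the test's arguments except for the one added --extra-config flag from the suggestion:

	out/minikube-linux-arm64 start -p kubernetes-upgrade-439215 --memory=3072 \
	  --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 \
	  --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd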
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-439215 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-439215 version --output=json: exit status 1 (113.541891ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
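The split output above separates the two halves of 'kubectl version': the clientVersion block is produced locally (the exit status 1 comes only from the failed server half), while the server query needs the apiserver at 192.168.76.2:8443, which refused the connection. A quick way to confirm that split, as a sketch using only the context and endpoint shown above:

	# client-only version never contacts the cluster and should succeed
	kubectl --context kubernetes-upgrade-439215 version --client --output=json
	# probe the refused endpoint directly (-k because the cluster CA is not in the host trust store)
	curl -k --max-time 5 https://192.168.76.2:8443/healthz || echo 'apiserver unreachable'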
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-12 01:19:33.488931187 +0000 UTC m=+5023.092009792
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-439215
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-439215:

-- stdout --
	[
	    {
	        "Id": "0faaece17446a5610682c5a640ccfd8b735ca71525988acd798d85d88a37eddd",
	        "Created": "2025-12-12T01:06:21.126751149Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 204001,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T01:06:58.55553292Z",
	            "FinishedAt": "2025-12-12T01:06:57.465734892Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/0faaece17446a5610682c5a640ccfd8b735ca71525988acd798d85d88a37eddd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0faaece17446a5610682c5a640ccfd8b735ca71525988acd798d85d88a37eddd/hostname",
	        "HostsPath": "/var/lib/docker/containers/0faaece17446a5610682c5a640ccfd8b735ca71525988acd798d85d88a37eddd/hosts",
	        "LogPath": "/var/lib/docker/containers/0faaece17446a5610682c5a640ccfd8b735ca71525988acd798d85d88a37eddd/0faaece17446a5610682c5a640ccfd8b735ca71525988acd798d85d88a37eddd-json.log",
	        "Name": "/kubernetes-upgrade-439215",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-439215:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-439215",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0faaece17446a5610682c5a640ccfd8b735ca71525988acd798d85d88a37eddd",
	                "LowerDir": "/var/lib/docker/overlay2/50f219a11d4296bf0226d7bf9653b675305b4f2a06ed7fe8bf161e3425ed8664-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/50f219a11d4296bf0226d7bf9653b675305b4f2a06ed7fe8bf161e3425ed8664/merged",
	                "UpperDir": "/var/lib/docker/overlay2/50f219a11d4296bf0226d7bf9653b675305b4f2a06ed7fe8bf161e3425ed8664/diff",
	                "WorkDir": "/var/lib/docker/overlay2/50f219a11d4296bf0226d7bf9653b675305b4f2a06ed7fe8bf161e3425ed8664/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-439215",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-439215/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-439215",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-439215",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-439215",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c92ac0f1c7ed3171a7a446a8acac8baa9b6c7f8fab6b8c73897fb02d46a6fd78",
	            "SandboxKey": "/var/run/docker/netns/c92ac0f1c7ed",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33013"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33014"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33017"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33015"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33016"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-439215": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:80:53:a4:9f:46",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6da0dbcd8e9e9b207a47ffc3f98663b6d205dc160e23cd315c8df27fa47a8d6",
	                    "EndpointID": "acfaae52e9753b629fb983e37e97f8577ed1f08335e7b068e103e86d7cc6aa79",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-439215",
	                        "0faaece17446"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
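The inspect dump records the pattern minikube uses for port mapping: HostConfig.PortBindings asks for ephemeral ports on 127.0.0.1 (HostPort left empty), and NetworkSettings.Ports holds the resolved values, e.g. 8443/tcp published on 127.0.0.1:33016. The same lookup works directly with a Go template; a sketch mirroring the format string the harness itself runs later for 22/tcp:

	docker inspect kubernetes-upgrade-439215 \
	  -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
	# prints 33016 for the container state captured above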
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-439215 -n kubernetes-upgrade-439215
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-439215 -n kubernetes-upgrade-439215: exit status 2 (426.183363ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
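'--format={{.Host}}' selects a single field of the status struct, which is why the container can print Running while the command still exits 2 (a non-zero status signals that some component, not necessarily the host, is unhealthy). The same Go-template mechanism can show the other components; a sketch, assuming the standard Host/Kubelet/APIServer status fields:

	out/minikube-linux-arm64 status -p kubernetes-upgrade-439215 \
	  --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'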
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-439215 logs -n 25
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                              ARGS                                                                                                               │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-341847 sudo systemctl cat docker --no-pager                                                                                                                                                                           │ cilium-341847            │ jenkins │ v1.37.0 │ 12 Dec 25 01:14 UTC │                     │
	│ ssh     │ -p cilium-341847 sudo cat /etc/docker/daemon.json                                                                                                                                                                               │ cilium-341847            │ jenkins │ v1.37.0 │ 12 Dec 25 01:14 UTC │                     │
	│ ssh     │ -p cilium-341847 sudo docker system info                                                                                                                                                                                        │ cilium-341847            │ jenkins │ v1.37.0 │ 12 Dec 25 01:14 UTC │                     │
	│ ssh     │ -p cilium-341847 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                       │ cilium-341847            │ jenkins │ v1.37.0 │ 12 Dec 25 01:14 UTC │                     │
	│ ssh     │ -p cilium-341847 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                       │ cilium-341847            │ jenkins │ v1.37.0 │ 12 Dec 25 01:14 UTC │                     │
	│ ssh     │ -p cilium-341847 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                  │ cilium-341847            │ jenkins │ v1.37.0 │ 12 Dec 25 01:14 UTC │                     │
	│ ssh     │ -p cilium-341847 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                            │ cilium-341847            │ jenkins │ v1.37.0 │ 12 Dec 25 01:14 UTC │                     │
	│ ssh     │ -p cilium-341847 sudo cri-dockerd --version                                                                                                                                                                                     │ cilium-341847            │ jenkins │ v1.37.0 │ 12 Dec 25 01:14 UTC │                     │
	│ ssh     │ -p cilium-341847 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                       │ cilium-341847            │ jenkins │ v1.37.0 │ 12 Dec 25 01:14 UTC │                     │
	│ ssh     │ -p cilium-341847 sudo systemctl cat containerd --no-pager                                                                                                                                                                       │ cilium-341847            │ jenkins │ v1.37.0 │ 12 Dec 25 01:14 UTC │                     │
	│ ssh     │ -p cilium-341847 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                │ cilium-341847            │ jenkins │ v1.37.0 │ 12 Dec 25 01:14 UTC │                     │
	│ ssh     │ -p cilium-341847 sudo cat /etc/containerd/config.toml                                                                                                                                                                           │ cilium-341847            │ jenkins │ v1.37.0 │ 12 Dec 25 01:14 UTC │                     │
	│ ssh     │ -p cilium-341847 sudo containerd config dump                                                                                                                                                                                    │ cilium-341847            │ jenkins │ v1.37.0 │ 12 Dec 25 01:14 UTC │                     │
	│ ssh     │ -p cilium-341847 sudo systemctl status crio --all --full --no-pager                                                                                                                                                             │ cilium-341847            │ jenkins │ v1.37.0 │ 12 Dec 25 01:14 UTC │                     │
	│ ssh     │ -p cilium-341847 sudo systemctl cat crio --no-pager                                                                                                                                                                             │ cilium-341847            │ jenkins │ v1.37.0 │ 12 Dec 25 01:14 UTC │                     │
	│ ssh     │ -p cilium-341847 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                   │ cilium-341847            │ jenkins │ v1.37.0 │ 12 Dec 25 01:14 UTC │                     │
	│ ssh     │ -p cilium-341847 sudo crio config                                                                                                                                                                                               │ cilium-341847            │ jenkins │ v1.37.0 │ 12 Dec 25 01:14 UTC │                     │
	│ delete  │ -p cilium-341847                                                                                                                                                                                                                │ cilium-341847            │ jenkins │ v1.37.0 │ 12 Dec 25 01:14 UTC │ 12 Dec 25 01:14 UTC │
	│ start   │ -p force-systemd-env-104389 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                │ force-systemd-env-104389 │ jenkins │ v1.37.0 │ 12 Dec 25 01:14 UTC │ 12 Dec 25 01:15 UTC │
	│ ssh     │ force-systemd-env-104389 ssh cat /etc/containerd/config.toml                                                                                                                                                                    │ force-systemd-env-104389 │ jenkins │ v1.37.0 │ 12 Dec 25 01:15 UTC │ 12 Dec 25 01:15 UTC │
	│ delete  │ -p force-systemd-env-104389                                                                                                                                                                                                     │ force-systemd-env-104389 │ jenkins │ v1.37.0 │ 12 Dec 25 01:15 UTC │ 12 Dec 25 01:15 UTC │
	│ start   │ -p cert-expiration-262857 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                    │ cert-expiration-262857   │ jenkins │ v1.37.0 │ 12 Dec 25 01:15 UTC │ 12 Dec 25 01:16 UTC │
	│ start   │ -p cert-expiration-262857 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                 │ cert-expiration-262857   │ jenkins │ v1.37.0 │ 12 Dec 25 01:19 UTC │ 12 Dec 25 01:19 UTC │
	│ delete  │ -p cert-expiration-262857                                                                                                                                                                                                       │ cert-expiration-262857   │ jenkins │ v1.37.0 │ 12 Dec 25 01:19 UTC │ 12 Dec 25 01:19 UTC │
	│ start   │ -p cert-options-805684 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd │ cert-options-805684      │ jenkins │ v1.37.0 │ 12 Dec 25 01:19 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 01:19:13
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:19:13.316810  247401 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:19:13.316962  247401 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:19:13.316967  247401 out.go:374] Setting ErrFile to fd 2...
	I1212 01:19:13.316970  247401 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:19:13.317263  247401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:19:13.317737  247401 out.go:368] Setting JSON to false
	I1212 01:19:13.318699  247401 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7300,"bootTime":1765495054,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 01:19:13.318754  247401 start.go:143] virtualization:  
	I1212 01:19:13.322352  247401 out.go:179] * [cert-options-805684] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:19:13.326750  247401 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:19:13.326839  247401 notify.go:221] Checking for updates...
	I1212 01:19:13.331685  247401 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:19:13.334946  247401 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:19:13.338144  247401 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 01:19:13.341352  247401 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:19:13.345894  247401 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:19:13.349524  247401 config.go:182] Loaded profile config "kubernetes-upgrade-439215": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:19:13.349613  247401 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:19:13.371414  247401 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:19:13.371522  247401 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:19:13.429361  247401 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:19:13.419324899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:19:13.429458  247401 docker.go:319] overlay module found
	I1212 01:19:13.432880  247401 out.go:179] * Using the docker driver based on user configuration
	I1212 01:19:13.435775  247401 start.go:309] selected driver: docker
	I1212 01:19:13.435786  247401 start.go:927] validating driver "docker" against <nil>
	I1212 01:19:13.435797  247401 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:19:13.436537  247401 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:19:13.495953  247401 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:19:13.486329941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:19:13.496095  247401 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 01:19:13.496294  247401 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 01:19:13.499224  247401 out.go:179] * Using Docker driver with root privileges
	I1212 01:19:13.502035  247401 cni.go:84] Creating CNI manager for ""
	I1212 01:19:13.502087  247401 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:19:13.502096  247401 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 01:19:13.502173  247401 start.go:353] cluster config:
	{Name:cert-options-805684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-options-805684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:19:13.505223  247401 out.go:179] * Starting "cert-options-805684" primary control-plane node in "cert-options-805684" cluster
	I1212 01:19:13.508003  247401 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 01:19:13.511126  247401 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:19:13.513895  247401 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1212 01:19:13.513933  247401 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1212 01:19:13.513941  247401 cache.go:65] Caching tarball of preloaded images
	I1212 01:19:13.513952  247401 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:19:13.514026  247401 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 01:19:13.514034  247401 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1212 01:19:13.514165  247401 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/config.json ...
	I1212 01:19:13.514181  247401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/config.json: {Name:mk81a32acc75c92ef1f05e3d81b2c4898891c065 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:19:13.533716  247401 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:19:13.533727  247401 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:19:13.533747  247401 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:19:13.533776  247401 start.go:360] acquireMachinesLock for cert-options-805684: {Name:mk921c471ca6809ca459156c411bb9aaa08095c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:19:13.533881  247401 start.go:364] duration metric: took 91.389µs to acquireMachinesLock for "cert-options-805684"
	I1212 01:19:13.533905  247401 start.go:93] Provisioning new machine with config: &{Name:cert-options-805684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-options-805684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 01:19:13.533973  247401 start.go:125] createHost starting for "" (driver="docker")
	I1212 01:19:13.537439  247401 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 01:19:13.537671  247401 start.go:159] libmachine.API.Create for "cert-options-805684" (driver="docker")
	I1212 01:19:13.537700  247401 client.go:173] LocalClient.Create starting
	I1212 01:19:13.537784  247401 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem
	I1212 01:19:13.537821  247401 main.go:143] libmachine: Decoding PEM data...
	I1212 01:19:13.537837  247401 main.go:143] libmachine: Parsing certificate...
	I1212 01:19:13.537889  247401 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem
	I1212 01:19:13.537907  247401 main.go:143] libmachine: Decoding PEM data...
	I1212 01:19:13.537917  247401 main.go:143] libmachine: Parsing certificate...
	I1212 01:19:13.538260  247401 cli_runner.go:164] Run: docker network inspect cert-options-805684 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 01:19:13.554238  247401 cli_runner.go:211] docker network inspect cert-options-805684 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 01:19:13.554304  247401 network_create.go:284] running [docker network inspect cert-options-805684] to gather additional debugging logs...
	I1212 01:19:13.554322  247401 cli_runner.go:164] Run: docker network inspect cert-options-805684
	W1212 01:19:13.568980  247401 cli_runner.go:211] docker network inspect cert-options-805684 returned with exit code 1
	I1212 01:19:13.569005  247401 network_create.go:287] error running [docker network inspect cert-options-805684]: docker network inspect cert-options-805684: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-options-805684 not found
	I1212 01:19:13.569024  247401 network_create.go:289] output of [docker network inspect cert-options-805684]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-options-805684 not found
	
	** /stderr **
	I1212 01:19:13.569169  247401 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:19:13.584830  247401 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4cd687b06342 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a2:e8:c8:87:d3:0a} reservation:<nil>}
	I1212 01:19:13.585096  247401 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c02c16721c9d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:e7:06:63:2c:e9} reservation:<nil>}
	I1212 01:19:13.585354  247401 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-805b07ff58c0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:18:35:7a:03:02} reservation:<nil>}
	I1212 01:19:13.585622  247401 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a6da0dbcd8e9 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:4c:0b:97:09:33} reservation:<nil>}
	I1212 01:19:13.586013  247401 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a3cc0}
	I1212 01:19:13.586027  247401 network_create.go:124] attempt to create docker network cert-options-805684 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1212 01:19:13.586080  247401 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-options-805684 cert-options-805684
	I1212 01:19:13.648341  247401 network_create.go:108] docker network cert-options-805684 192.168.85.0/24 created
	I1212 01:19:13.648362  247401 kic.go:121] calculated static IP "192.168.85.2" for the "cert-options-805684" container
	I1212 01:19:13.648446  247401 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 01:19:13.664202  247401 cli_runner.go:164] Run: docker volume create cert-options-805684 --label name.minikube.sigs.k8s.io=cert-options-805684 --label created_by.minikube.sigs.k8s.io=true
	I1212 01:19:13.681341  247401 oci.go:103] Successfully created a docker volume cert-options-805684
	I1212 01:19:13.681435  247401 cli_runner.go:164] Run: docker run --rm --name cert-options-805684-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-805684 --entrypoint /usr/bin/test -v cert-options-805684:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 01:19:14.230661  247401 oci.go:107] Successfully prepared a docker volume cert-options-805684
	I1212 01:19:14.230730  247401 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1212 01:19:14.230738  247401 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 01:19:14.230805  247401 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cert-options-805684:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 01:19:18.228164  247401 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cert-options-805684:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.997312335s)
	I1212 01:19:18.228185  247401 kic.go:203] duration metric: took 3.997443685s to extract preloaded images to volume ...
	W1212 01:19:18.228328  247401 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 01:19:18.228430  247401 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 01:19:18.282166  247401 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-options-805684 --name cert-options-805684 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-805684 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-options-805684 --network cert-options-805684 --ip 192.168.85.2 --volume cert-options-805684:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8555 --publish=127.0.0.1::8555 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 01:19:18.600708  247401 cli_runner.go:164] Run: docker container inspect cert-options-805684 --format={{.State.Running}}
	I1212 01:19:18.624261  247401 cli_runner.go:164] Run: docker container inspect cert-options-805684 --format={{.State.Status}}
	I1212 01:19:18.647612  247401 cli_runner.go:164] Run: docker exec cert-options-805684 stat /var/lib/dpkg/alternatives/iptables
	I1212 01:19:18.698138  247401 oci.go:144] the created container "cert-options-805684" has a running status.
	I1212 01:19:18.698160  247401 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/cert-options-805684/id_rsa...
	I1212 01:19:18.829614  247401 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-2343/.minikube/machines/cert-options-805684/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 01:19:18.855032  247401 cli_runner.go:164] Run: docker container inspect cert-options-805684 --format={{.State.Status}}
	I1212 01:19:18.878024  247401 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 01:19:18.878036  247401 kic_runner.go:114] Args: [docker exec --privileged cert-options-805684 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 01:19:18.952852  247401 cli_runner.go:164] Run: docker container inspect cert-options-805684 --format={{.State.Status}}
	I1212 01:19:18.971584  247401 machine.go:94] provisionDockerMachine start ...
	I1212 01:19:18.971678  247401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-805684
	I1212 01:19:18.991420  247401 main.go:143] libmachine: Using SSH client type: native
	I1212 01:19:18.991749  247401 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1212 01:19:18.991756  247401 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:19:18.992457  247401 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 01:19:22.146677  247401 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-options-805684
	
	I1212 01:19:22.146691  247401 ubuntu.go:182] provisioning hostname "cert-options-805684"
	I1212 01:19:22.146753  247401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-805684
	I1212 01:19:22.164264  247401 main.go:143] libmachine: Using SSH client type: native
	I1212 01:19:22.164561  247401 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1212 01:19:22.164568  247401 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-options-805684 && echo "cert-options-805684" | sudo tee /etc/hostname
	I1212 01:19:22.326441  247401 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-options-805684
	
	I1212 01:19:22.326521  247401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-805684
	I1212 01:19:22.352353  247401 main.go:143] libmachine: Using SSH client type: native
	I1212 01:19:22.352664  247401 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1212 01:19:22.352679  247401 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-805684' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-805684/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-805684' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:19:22.503530  247401 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:19:22.503547  247401 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 01:19:22.503583  247401 ubuntu.go:190] setting up certificates
	I1212 01:19:22.503597  247401 provision.go:84] configureAuth start
	I1212 01:19:22.503656  247401 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-805684
	I1212 01:19:22.520393  247401 provision.go:143] copyHostCerts
	I1212 01:19:22.520447  247401 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 01:19:22.520454  247401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 01:19:22.520528  247401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 01:19:22.520647  247401 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 01:19:22.520652  247401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 01:19:22.520678  247401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 01:19:22.520759  247401 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 01:19:22.520763  247401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 01:19:22.520786  247401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 01:19:22.520830  247401 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.cert-options-805684 san=[127.0.0.1 192.168.85.2 cert-options-805684 localhost minikube]
	I1212 01:19:22.848384  247401 provision.go:177] copyRemoteCerts
	I1212 01:19:22.848441  247401 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:19:22.848478  247401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-805684
	I1212 01:19:22.867671  247401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/cert-options-805684/id_rsa Username:docker}
	I1212 01:19:22.974273  247401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:19:22.990405  247401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 01:19:23.010544  247401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 01:19:23.028362  247401 provision.go:87] duration metric: took 524.744942ms to configureAuth
	I1212 01:19:23.028380  247401 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:19:23.028579  247401 config.go:182] Loaded profile config "cert-options-805684": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 01:19:23.028584  247401 machine.go:97] duration metric: took 4.05699173s to provisionDockerMachine
	I1212 01:19:23.028590  247401 client.go:176] duration metric: took 9.490884952s to LocalClient.Create
	I1212 01:19:23.028618  247401 start.go:167] duration metric: took 9.490941658s to libmachine.API.Create "cert-options-805684"
	I1212 01:19:23.028632  247401 start.go:293] postStartSetup for "cert-options-805684" (driver="docker")
	I1212 01:19:23.028641  247401 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:19:23.028695  247401 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:19:23.028735  247401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-805684
	I1212 01:19:23.049034  247401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/cert-options-805684/id_rsa Username:docker}
	I1212 01:19:23.154896  247401 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:19:23.158090  247401 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:19:23.158108  247401 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:19:23.158118  247401 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 01:19:23.158171  247401 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 01:19:23.158255  247401 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 01:19:23.158360  247401 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:19:23.165586  247401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:19:23.181972  247401 start.go:296] duration metric: took 153.327302ms for postStartSetup
	I1212 01:19:23.182328  247401 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-805684
	I1212 01:19:23.198696  247401 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/config.json ...
	I1212 01:19:23.199019  247401 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:19:23.199059  247401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-805684
	I1212 01:19:23.215469  247401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/cert-options-805684/id_rsa Username:docker}
	I1212 01:19:23.316048  247401 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:19:23.320937  247401 start.go:128] duration metric: took 9.786948033s to createHost
	I1212 01:19:23.320950  247401 start.go:83] releasing machines lock for "cert-options-805684", held for 9.787062791s
	I1212 01:19:23.321032  247401 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-805684
	I1212 01:19:23.338436  247401 ssh_runner.go:195] Run: cat /version.json
	I1212 01:19:23.338456  247401 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:19:23.338478  247401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-805684
	I1212 01:19:23.338504  247401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-805684
	I1212 01:19:23.361677  247401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/cert-options-805684/id_rsa Username:docker}
	I1212 01:19:23.376189  247401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/cert-options-805684/id_rsa Username:docker}
	I1212 01:19:23.553755  247401 ssh_runner.go:195] Run: systemctl --version
	I1212 01:19:23.560111  247401 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:19:23.564205  247401 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:19:23.564282  247401 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:19:23.590089  247401 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1212 01:19:23.590102  247401 start.go:496] detecting cgroup driver to use...
	I1212 01:19:23.590132  247401 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:19:23.590185  247401 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 01:19:23.605123  247401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 01:19:23.617775  247401 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:19:23.617830  247401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:19:23.634803  247401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:19:23.652840  247401 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:19:23.772707  247401 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:19:23.923625  247401 docker.go:234] disabling docker service ...
	I1212 01:19:23.923681  247401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:19:23.944361  247401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:19:23.957383  247401 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:19:24.079146  247401 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:19:24.205950  247401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:19:24.218382  247401 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:19:24.233106  247401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 01:19:24.243055  247401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 01:19:24.252961  247401 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 01:19:24.253019  247401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 01:19:24.261680  247401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:19:24.270618  247401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 01:19:24.279383  247401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:19:24.288296  247401 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:19:24.296277  247401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 01:19:24.305099  247401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 01:19:24.313594  247401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 01:19:24.322741  247401 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:19:24.330167  247401 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:19:24.337641  247401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:19:24.457565  247401 ssh_runner.go:195] Run: sudo systemctl restart containerd
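	(The sed edits above — sandbox image, OOM score handling, cgroup driver, CNI conf_dir, unprivileged ports — all target /etc/containerd/config.toml before this restart. A quick way to confirm they took effect, as a sketch to run inside the node, e.g. via `minikube ssh`:

	# check the settings the sed edits are expected to leave behind
	grep -E 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	# expected, approximately:
	#   sandbox_image = "registry.k8s.io/pause:3.10.1"
	#   restrict_oom_score_adj = false
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true
	)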
	I1212 01:19:24.616384  247401 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 01:19:24.616459  247401 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 01:19:24.620222  247401 start.go:564] Will wait 60s for crictl version
	I1212 01:19:24.620275  247401 ssh_runner.go:195] Run: which crictl
	I1212 01:19:24.623780  247401 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:19:24.648765  247401 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
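	(The crictl calls above resolve the runtime through the /etc/crictl.yaml written a few lines earlier; the same check can be reproduced by hand with the endpoint passed explicitly — a sketch, assuming crictl is on the node's PATH:

	# bypass /etc/crictl.yaml and name the containerd socket directly
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
	)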
	I1212 01:19:24.648824  247401 ssh_runner.go:195] Run: containerd --version
	I1212 01:19:24.669180  247401 ssh_runner.go:195] Run: containerd --version
	I1212 01:19:24.696029  247401 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1212 01:19:24.698937  247401 cli_runner.go:164] Run: docker network inspect cert-options-805684 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:19:24.714980  247401 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1212 01:19:24.718719  247401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:19:24.728133  247401 kubeadm.go:884] updating cluster {Name:cert-options-805684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-options-805684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:19:24.728236  247401 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1212 01:19:24.728302  247401 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:19:24.755646  247401 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:19:24.755659  247401 containerd.go:534] Images already preloaded, skipping extraction
	I1212 01:19:24.755725  247401 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:19:24.780257  247401 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:19:24.780268  247401 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:19:24.780274  247401 kubeadm.go:935] updating node { 192.168.85.2 8555 v1.34.2 containerd true true} ...
	I1212 01:19:24.780369  247401 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-options-805684 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:cert-options-805684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:19:24.780443  247401 ssh_runner.go:195] Run: sudo crictl info
	I1212 01:19:24.804933  247401 cni.go:84] Creating CNI manager for ""
	I1212 01:19:24.804943  247401 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:19:24.804959  247401 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 01:19:24.804985  247401 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8555 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-options-805684 NodeName:cert-options-805684 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:19:24.805099  247401 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8555
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "cert-options-805684"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8555
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
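	(The generated trio of kubeadm/kubelet/kube-proxy configs above is written to /var/tmp/minikube/kubeadm.yaml.new below and later fed to kubeadm init. If a config like this needs checking outside the test, kubeadm can exercise it without touching the node; a minimal sketch using the same paths minikube uses:

	# validate the rendered config without creating anything
	sudo env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	)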
	I1212 01:19:24.805162  247401 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 01:19:24.812810  247401 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:19:24.812868  247401 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:19:24.820293  247401 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1212 01:19:24.832725  247401 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:19:24.845545  247401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1212 01:19:24.858875  247401 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:19:24.862282  247401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:19:24.871719  247401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:19:24.988676  247401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:19:25.006276  247401 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684 for IP: 192.168.85.2
	I1212 01:19:25.006290  247401 certs.go:195] generating shared ca certs ...
	I1212 01:19:25.006323  247401 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:19:25.006535  247401 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 01:19:25.006588  247401 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 01:19:25.006605  247401 certs.go:257] generating profile certs ...
	I1212 01:19:25.006683  247401 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/client.key
	I1212 01:19:25.006701  247401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/client.crt with IP's: []
	I1212 01:19:25.145142  247401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/client.crt ...
	I1212 01:19:25.145158  247401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/client.crt: {Name:mk2b11b1003d7efc2fe30aed3ef09fc7ee4882d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:19:25.145366  247401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/client.key ...
	I1212 01:19:25.145373  247401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/client.key: {Name:mkcfb0c95ac278cf1293e68593d0119626175056 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:19:25.145473  247401 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/apiserver.key.c4895f56
	I1212 01:19:25.145487  247401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/apiserver.crt.c4895f56 with IP's: [127.0.0.1 192.168.15.15 10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1212 01:19:25.638796  247401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/apiserver.crt.c4895f56 ...
	I1212 01:19:25.638814  247401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/apiserver.crt.c4895f56: {Name:mk72c2d12308c92c0acc2e169d9e2971a8c5a9e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:19:25.639029  247401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/apiserver.key.c4895f56 ...
	I1212 01:19:25.639037  247401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/apiserver.key.c4895f56: {Name:mkb1bd64ba54a498065d6602baae665b8cf8eb61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:19:25.639123  247401 certs.go:382] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/apiserver.crt.c4895f56 -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/apiserver.crt
	I1212 01:19:25.639204  247401 certs.go:386] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/apiserver.key.c4895f56 -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/apiserver.key
	I1212 01:19:25.639257  247401 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/proxy-client.key
	I1212 01:19:25.639270  247401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/proxy-client.crt with IP's: []
	I1212 01:19:26.121869  247401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/proxy-client.crt ...
	I1212 01:19:26.121889  247401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/proxy-client.crt: {Name:mk372d1620ba689a29fd4191e8ff6e8e81fefd44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:19:26.122112  247401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/proxy-client.key ...
	I1212 01:19:26.122122  247401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/proxy-client.key: {Name:mkbe26ea4e2b03c2da968ee7152b983773b71ee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:19:26.122318  247401 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 01:19:26.122359  247401 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 01:19:26.122366  247401 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:19:26.122392  247401 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:19:26.122417  247401 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:19:26.122440  247401 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 01:19:26.122485  247401 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:19:26.123077  247401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:19:26.143167  247401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:19:26.161246  247401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:19:26.178909  247401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:19:26.197167  247401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1480 bytes)
	I1212 01:19:26.214903  247401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:19:26.232372  247401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:19:26.250301  247401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/cert-options-805684/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:19:26.267244  247401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 01:19:26.284400  247401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:19:26.301774  247401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 01:19:26.318632  247401 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
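	(Because this cert-options profile requests extra API-server SANs — localhost, www.google.com, 127.0.0.1, 192.168.15.15 — the apiserver.crt copied above is the file to check when the test later asserts on them. One way to list the SANs, sketched on the assumption that openssl is available on the node:

	# print the Subject Alternative Name block of the apiserver certificate
	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
	)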
	I1212 01:19:26.330784  247401 ssh_runner.go:195] Run: openssl version
	I1212 01:19:26.337072  247401 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 01:19:26.344249  247401 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 01:19:26.351526  247401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 01:19:26.355014  247401 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 01:19:26.355071  247401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 01:19:26.395718  247401 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 01:19:26.402723  247401 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42902.pem /etc/ssl/certs/3ec20f2e.0
	I1212 01:19:26.409728  247401 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:19:26.417262  247401 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:19:26.424519  247401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:19:26.428034  247401 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:19:26.428087  247401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:19:26.468477  247401 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:19:26.475691  247401 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 01:19:26.482924  247401 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 01:19:26.490033  247401 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 01:19:26.497359  247401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 01:19:26.500891  247401 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 01:19:26.500967  247401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 01:19:26.543784  247401 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:19:26.551222  247401 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4290.pem /etc/ssl/certs/51391683.0
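	(The test/ln/openssl triplets above follow OpenSSL's hashed-directory convention: each trusted certificate in /etc/ssl/certs gets a symlink named after its subject hash, e.g. b5213941.0 for minikubeCA.pem. The same link can be derived by hand; a sketch for one file:

	# compute the subject hash and create the symlink OpenSSL looks up
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	)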
	I1212 01:19:26.558208  247401 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:19:26.569234  247401 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 01:19:26.569279  247401 kubeadm.go:401] StartCluster: {Name:cert-options-805684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-options-805684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:19:26.569356  247401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 01:19:26.569435  247401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:19:26.597243  247401 cri.go:89] found id: ""
	I1212 01:19:26.597302  247401 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:19:26.604950  247401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:19:26.612573  247401 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:19:26.612637  247401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:19:26.620269  247401 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:19:26.620281  247401 kubeadm.go:158] found existing configuration files:
	
	I1212 01:19:26.620340  247401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf
	I1212 01:19:26.627725  247401 kubeadm.go:164] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:19:26.627786  247401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:19:26.634715  247401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf
	I1212 01:19:26.642309  247401 kubeadm.go:164] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:19:26.642363  247401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:19:26.649544  247401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf
	I1212 01:19:26.656945  247401 kubeadm.go:164] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:19:26.657007  247401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:19:26.664186  247401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf
	I1212 01:19:26.671703  247401 kubeadm.go:164] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:19:26.671779  247401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:19:26.678867  247401 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:19:26.718428  247401 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 01:19:26.718478  247401 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:19:26.742464  247401 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:19:26.742531  247401 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:19:26.742565  247401 kubeadm.go:319] OS: Linux
	I1212 01:19:26.742609  247401 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:19:26.742656  247401 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:19:26.742701  247401 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:19:26.742748  247401 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:19:26.742794  247401 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:19:26.742844  247401 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:19:26.742888  247401 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:19:26.742937  247401 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:19:26.742982  247401 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:19:26.814268  247401 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:19:26.814370  247401 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:19:26.814460  247401 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:19:26.821524  247401 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:19:26.828097  247401 out.go:252]   - Generating certificates and keys ...
	I1212 01:19:26.828189  247401 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:19:26.828252  247401 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:19:27.011695  247401 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 01:19:27.233830  247401 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 01:19:27.297610  247401 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 01:19:27.694227  247401 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 01:19:31.736902  203848 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000799957s
	I1212 01:19:31.736934  203848 kubeadm.go:319] 
	I1212 01:19:31.736992  203848 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:19:31.737026  203848 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:19:31.737130  203848 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:19:31.737136  203848 kubeadm.go:319] 
	I1212 01:19:31.737241  203848 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:19:31.737274  203848 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:19:31.737304  203848 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:19:31.737309  203848 kubeadm.go:319] 
	I1212 01:19:31.740266  203848 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:19:31.740698  203848 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:19:31.740807  203848 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:19:31.741069  203848 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 01:19:31.741075  203848 kubeadm.go:319] 
	I1212 01:19:31.741143  203848 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
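	(When wait-control-plane fails like this, the kubelet never answered on its local health port, so kubeadm's own suggestions above are the right first step. The checks below reproduce exactly what the kubelet-check phase polls — a sketch to run on the affected node:

	# the endpoint kubeadm polls for up to 4m0s
	curl -sSL http://127.0.0.1:10248/healthz; echo
	# service state and the most recent kubelet log entries
	systemctl status kubelet --no-pager
	journalctl -xeu kubelet --no-pager | tail -n 50
	)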
	I1212 01:19:31.741197  203848 kubeadm.go:403] duration metric: took 12m18.808665943s to StartCluster
	I1212 01:19:31.741229  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:19:31.741295  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:19:31.776138  203848 cri.go:89] found id: ""
	I1212 01:19:31.776160  203848 logs.go:282] 0 containers: []
	W1212 01:19:31.776168  203848 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:19:31.776174  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:19:31.776234  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:19:31.826147  203848 cri.go:89] found id: ""
	I1212 01:19:31.826169  203848 logs.go:282] 0 containers: []
	W1212 01:19:31.826177  203848 logs.go:284] No container was found matching "etcd"
	I1212 01:19:31.826183  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:19:31.826246  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:19:31.878291  203848 cri.go:89] found id: ""
	I1212 01:19:31.878313  203848 logs.go:282] 0 containers: []
	W1212 01:19:31.878322  203848 logs.go:284] No container was found matching "coredns"
	I1212 01:19:31.878329  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:19:31.878390  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:19:31.910681  203848 cri.go:89] found id: ""
	I1212 01:19:31.910703  203848 logs.go:282] 0 containers: []
	W1212 01:19:31.910711  203848 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:19:31.910717  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:19:31.910772  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:19:31.945031  203848 cri.go:89] found id: ""
	I1212 01:19:31.945055  203848 logs.go:282] 0 containers: []
	W1212 01:19:31.945065  203848 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:19:31.945071  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:19:31.945129  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:19:31.981346  203848 cri.go:89] found id: ""
	I1212 01:19:31.981412  203848 logs.go:282] 0 containers: []
	W1212 01:19:31.981435  203848 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:19:31.981455  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:19:31.981541  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:19:32.017546  203848 cri.go:89] found id: ""
	I1212 01:19:32.017570  203848 logs.go:282] 0 containers: []
	W1212 01:19:32.017579  203848 logs.go:284] No container was found matching "kindnet"
	I1212 01:19:32.017604  203848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1212 01:19:32.017683  203848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 01:19:32.065393  203848 cri.go:89] found id: ""
	I1212 01:19:32.065469  203848 logs.go:282] 0 containers: []
	W1212 01:19:32.065492  203848 logs.go:284] No container was found matching "storage-provisioner"
	I1212 01:19:32.065516  203848 logs.go:123] Gathering logs for container status ...
	I1212 01:19:32.065556  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:19:32.145353  203848 logs.go:123] Gathering logs for kubelet ...
	I1212 01:19:32.145418  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:19:32.209552  203848 logs.go:123] Gathering logs for dmesg ...
	I1212 01:19:32.209584  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:19:32.225719  203848 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:19:32.225743  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:19:32.309077  203848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:19:32.309147  203848 logs.go:123] Gathering logs for containerd ...
	I1212 01:19:32.309174  203848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1212 01:19:32.356920  203848 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000799957s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:19:32.357024  203848 out.go:285] * 
	W1212 01:19:32.357223  203848 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1212 01:19:32.357279  203848 out.go:285] * 
	W1212 01:19:32.359555  203848 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
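	(Beyond the kubelet failure itself, the SystemVerification warning above matters for this job's environment: the node runs cgroups v1 (kernel 5.15.0-1084-aws), and kubelet v1.35+ requires an explicit opt-in to keep using it. A sketch of that opt-in as a kubeadm strategic-merge patch, matching the "kubeletconfiguration" patch target already visible in the output; the patch directory and file name here are illustrative:

	# write a KubeletConfiguration patch; FailCgroupV1 is the option named in
	# the warning (camelCase in YAML), set to false to allow cgroup v1 hosts
	mkdir -p /tmp/kubeadm-patches
	cat > /tmp/kubeadm-patches/kubeletconfiguration+strategic.yaml <<'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF
	# then pass the directory to kubeadm: kubeadm init --patches /tmp/kubeadm-patches ...
	)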
	I1212 01:19:32.367165  203848 out.go:203] 
	W1212 01:19:32.371117  203848 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000799957s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:19:32.371345  203848 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 01:19:32.371406  203848 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 01:19:32.374965  203848 out.go:203] 
	I1212 01:19:28.688922  247401 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 01:19:28.689096  247401 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-options-805684 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 01:19:28.874955  247401 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 01:19:28.875104  247401 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-options-805684 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 01:19:28.975173  247401 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 01:19:29.329474  247401 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 01:19:29.986510  247401 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 01:19:29.986742  247401 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:19:30.647173  247401 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:19:30.892801  247401 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:19:31.255486  247401 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:19:32.136484  247401 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:19:32.539639  247401 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:19:32.547421  247401 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:19:32.550466  247401 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:19:32.553941  247401 out.go:252]   - Booting up control plane ...
	I1212 01:19:32.554065  247401 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:19:32.554167  247401 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:19:32.555078  247401 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:19:32.608811  247401 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:19:32.608912  247401 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:19:32.627496  247401 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:19:32.627588  247401 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:19:32.627627  247401 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:19:32.843332  247401 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:19:32.843445  247401 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 01:11:24 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:11:24.721725176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:11:24 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:11:24.723143484Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.669107866s"
	Dec 12 01:11:24 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:11:24.723283867Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
	Dec 12 01:11:24 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:11:24.724124201Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
	Dec 12 01:11:25 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:11:25.387376889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 12 01:11:25 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:11:25.389242232Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
	Dec 12 01:11:25 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:11:25.391575541Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 12 01:11:25 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:11:25.395042061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 12 01:11:25 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:11:25.395841311Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 671.537196ms"
	Dec 12 01:11:25 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:11:25.395887006Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
	Dec 12 01:11:25 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:11:25.397105862Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
	Dec 12 01:11:27 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:11:27.107658844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:11:27 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:11:27.109868149Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21140371"
	Dec 12 01:11:27 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:11:27.112273015Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:11:27 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:11:27.117107332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:11:27 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:11:27.118494574Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.721348491s"
	Dec 12 01:11:27 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:11:27.118544782Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
	Dec 12 01:16:16 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:16:16.595481418Z" level=info msg="container event discarded" container=b012daae2df75e71bb35934f1d4d6228ef1754de71b8aa6bd4e15e06d397fd74 type=CONTAINER_DELETED_EVENT
	Dec 12 01:16:16 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:16:16.611226325Z" level=info msg="container event discarded" container=f8d4b40276857025f1e1b92d79f94f9b9d5ebd9583f77a2c4908f7614cddaafa type=CONTAINER_DELETED_EVENT
	Dec 12 01:16:16 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:16:16.625491154Z" level=info msg="container event discarded" container=926c31290f57da44d74e28505a0f7a7da1c5c784d93867c9ed480240245314b1 type=CONTAINER_DELETED_EVENT
	Dec 12 01:16:16 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:16:16.625551150Z" level=info msg="container event discarded" container=ce32a95eab4999da0319c3ed06383f4cc99d8a30be896ca16813828a65545ab3 type=CONTAINER_DELETED_EVENT
	Dec 12 01:16:16 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:16:16.668867863Z" level=info msg="container event discarded" container=8508ebbba553cc3f9dff2c422d5f9238692e37256f0cc4cb649e807364b869b3 type=CONTAINER_DELETED_EVENT
	Dec 12 01:16:16 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:16:16.668941177Z" level=info msg="container event discarded" container=64ad56abeb74f10b8641e05570c8fefd6ccc11bebf282a3d364ead07acbb0268 type=CONTAINER_DELETED_EVENT
	Dec 12 01:16:16 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:16:16.686195824Z" level=info msg="container event discarded" container=3831a03e0095367c196efae076b6dbd0611f3a70935bbdb0a8f665fd909915e3 type=CONTAINER_DELETED_EVENT
	Dec 12 01:16:16 kubernetes-upgrade-439215 containerd[555]: time="2025-12-12T01:16:16.686261974Z" level=info msg="container event discarded" container=82393de69b852f0a19de05bdeae1b38f21f668e44409282e0b078fa7833e49e5 type=CONTAINER_DELETED_EVENT
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	[Dec12 00:40] hrtimer: interrupt took 11339963 ns
	
	
	==> kernel <==
	 01:19:34 up  2:02,  0 user,  load average: 0.63, 1.51, 1.91
	Linux kubernetes-upgrade-439215 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 01:19:31 kubernetes-upgrade-439215 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:19:32 kubernetes-upgrade-439215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 12 01:19:32 kubernetes-upgrade-439215 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:19:32 kubernetes-upgrade-439215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:19:32 kubernetes-upgrade-439215 kubelet[14383]: E1212 01:19:32.115421   14383 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:19:32 kubernetes-upgrade-439215 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:19:32 kubernetes-upgrade-439215 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:19:32 kubernetes-upgrade-439215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 12 01:19:32 kubernetes-upgrade-439215 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:19:32 kubernetes-upgrade-439215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:19:32 kubernetes-upgrade-439215 kubelet[14414]: E1212 01:19:32.915814   14414 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:19:32 kubernetes-upgrade-439215 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:19:32 kubernetes-upgrade-439215 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:19:33 kubernetes-upgrade-439215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 12 01:19:33 kubernetes-upgrade-439215 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:19:33 kubernetes-upgrade-439215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:19:33 kubernetes-upgrade-439215 kubelet[14420]: E1212 01:19:33.627649   14420 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:19:33 kubernetes-upgrade-439215 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:19:33 kubernetes-upgrade-439215 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:19:34 kubernetes-upgrade-439215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 12 01:19:34 kubernetes-upgrade-439215 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:19:34 kubernetes-upgrade-439215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:19:34 kubernetes-upgrade-439215 kubelet[14447]: E1212 01:19:34.357916   14447 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:19:34 kubernetes-upgrade-439215 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:19:34 kubernetes-upgrade-439215 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
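The kubelet journal above isolates the failure: kubelet v1.35.0-beta.0 rejects its own configuration on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1") and systemd restarts it in a loop (restart counter 320 and climbing), so the apiserver on :8443 never comes up. A minimal triage sketch against a still-running profile, using only the commands the log itself suggests plus a standard cgroup-mode check; the container name is this run's profile and would differ elsewhere:

	# Which cgroup hierarchy is the node container on?
	# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy v1 hierarchy seen here.
	docker exec kubernetes-upgrade-439215 stat -fc %T /sys/fs/cgroup/
	# The two commands kubeadm recommends in the output above:
	docker exec kubernetes-upgrade-439215 systemctl status kubelet
	docker exec kubernetes-upgrade-439215 journalctl -xeu kubelet --no-pager | tail -n 20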
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-439215 -n kubernetes-upgrade-439215
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-439215 -n kubernetes-upgrade-439215: exit status 2 (511.549998ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-439215" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-439215" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-439215
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-439215: (2.718639633s)
--- FAIL: TestKubernetesUpgrade (805.54s)
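The SystemVerification warning in the kubeadm output names the escape hatch: for kubelet v1.35 or newer, cgroup v1 support must be re-enabled explicitly through the kubelet configuration option 'FailCgroupV1', and the validation itself skipped (minikube already passes SystemVerification in its --ignore-preflight-errors list). A minimal sketch of the configuration fragment follows, assuming the v1beta1 field is spelled failCgroupV1 and that it could be delivered through the kubeadm kubeletconfiguration patch mechanism visible in the '[patches] Applied patch' lines above; the file name and location are illustrative only. The more durable fix is running these jobs on a cgroup v2 host, since the option only defers a planned removal.

	# Sketch: re-enable cgroup v1 for kubelet >= v1.35 (option name taken from
	# the warning above; the file path is hypothetical, not minikube's patch dir).
	cat <<-'EOF' > kubeletconfiguration-cgroupv1.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF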

x
+
TestStartStop/group/no-preload/serial/FirstStart (512.34s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m30.736944486s)

-- stdout --
	* [no-preload-361053] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-361053" primary control-plane node in "no-preload-361053" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	I1212 01:22:52.032846  268396 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:22:52.033017  268396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:22:52.033030  268396 out.go:374] Setting ErrFile to fd 2...
	I1212 01:22:52.033035  268396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:22:52.033309  268396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:22:52.033749  268396 out.go:368] Setting JSON to false
	I1212 01:22:52.034711  268396 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7518,"bootTime":1765495054,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 01:22:52.034777  268396 start.go:143] virtualization:  
	I1212 01:22:52.040455  268396 out.go:179] * [no-preload-361053] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:22:52.043686  268396 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:22:52.043788  268396 notify.go:221] Checking for updates...
	I1212 01:22:52.050622  268396 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:22:52.053614  268396 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:22:52.056514  268396 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 01:22:52.059521  268396 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:22:52.062558  268396 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:22:52.065964  268396 config.go:182] Loaded profile config "embed-certs-648696": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 01:22:52.066091  268396 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:22:52.098912  268396 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:22:52.099068  268396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:22:52.181422  268396 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:22:52.168556719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:22:52.181549  268396 docker.go:319] overlay module found
	I1212 01:22:52.184586  268396 out.go:179] * Using the docker driver based on user configuration
	I1212 01:22:52.187322  268396 start.go:309] selected driver: docker
	I1212 01:22:52.187343  268396 start.go:927] validating driver "docker" against <nil>
	I1212 01:22:52.187358  268396 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:22:52.188123  268396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:22:52.282528  268396 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:22:52.27010489 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:22:52.282710  268396 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 01:22:52.282939  268396 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:22:52.285936  268396 out.go:179] * Using Docker driver with root privileges
	I1212 01:22:52.288739  268396 cni.go:84] Creating CNI manager for ""
	I1212 01:22:52.288802  268396 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:22:52.288815  268396 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 01:22:52.288904  268396 start.go:353] cluster config:
	{Name:no-preload-361053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:22:52.291950  268396 out.go:179] * Starting "no-preload-361053" primary control-plane node in "no-preload-361053" cluster
	I1212 01:22:52.294752  268396 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 01:22:52.297691  268396 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:22:52.300413  268396 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:22:52.300496  268396 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:22:52.300558  268396 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/config.json ...
	I1212 01:22:52.300595  268396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/config.json: {Name:mkfbcf07d31374e825a6e36b22ac54c96d9f156a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:22:52.300856  268396 cache.go:107] acquiring lock: {Name:mk71cce41032f52f0748ef343d21f16410e3a1fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:22:52.300918  268396 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1212 01:22:52.300930  268396 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 84.759µs
	I1212 01:22:52.300943  268396 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1212 01:22:52.300958  268396 cache.go:107] acquiring lock: {Name:mk86e2a34ccf063d967d1b885c7693629a6b1892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:22:52.301022  268396 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 01:22:52.301196  268396 cache.go:107] acquiring lock: {Name:mk5046428d0406b9fe0bac2e28c1f5cc3958499f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:22:52.301266  268396 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 01:22:52.301370  268396 cache.go:107] acquiring lock: {Name:mkc4887793edcc3c6296024b677e69f6ec1f79f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:22:52.301432  268396 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 01:22:52.301523  268396 cache.go:107] acquiring lock: {Name:mkeb49560acf33aa79e308e0b71177927ef617d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:22:52.301580  268396 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 01:22:52.301700  268396 cache.go:107] acquiring lock: {Name:mk2f0a11f2d527d62eb30e98e76f3a359773886b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:22:52.301739  268396 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1212 01:22:52.301747  268396 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 49.469µs
	I1212 01:22:52.301754  268396 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1212 01:22:52.301764  268396 cache.go:107] acquiring lock: {Name:mkf75c8f281a4d7578645f330ed9cc6bf48ab550 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:22:52.301796  268396 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1212 01:22:52.301806  268396 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 43.143µs
	I1212 01:22:52.301812  268396 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1212 01:22:52.301826  268396 cache.go:107] acquiring lock: {Name:mk1d6384b2d8bd32efb0f4661eaa55ecd74d4b80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:22:52.301886  268396 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 01:22:52.304734  268396 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 01:22:52.305208  268396 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 01:22:52.305661  268396 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 01:22:52.306109  268396 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 01:22:52.306500  268396 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 01:22:52.324437  268396 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:22:52.324459  268396 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:22:52.324485  268396 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:22:52.324516  268396 start.go:360] acquireMachinesLock for no-preload-361053: {Name:mk154c67822339b116aad3ea851214e3043755e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:22:52.324658  268396 start.go:364] duration metric: took 122.159µs to acquireMachinesLock for "no-preload-361053"
	I1212 01:22:52.324692  268396 start.go:93] Provisioning new machine with config: &{Name:no-preload-361053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 01:22:52.324764  268396 start.go:125] createHost starting for "" (driver="docker")
	I1212 01:22:52.329979  268396 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 01:22:52.330404  268396 start.go:159] libmachine.API.Create for "no-preload-361053" (driver="docker")
	I1212 01:22:52.330478  268396 client.go:173] LocalClient.Create starting
	I1212 01:22:52.330585  268396 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem
	I1212 01:22:52.330657  268396 main.go:143] libmachine: Decoding PEM data...
	I1212 01:22:52.330678  268396 main.go:143] libmachine: Parsing certificate...
	I1212 01:22:52.330795  268396 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem
	I1212 01:22:52.330822  268396 main.go:143] libmachine: Decoding PEM data...
	I1212 01:22:52.330835  268396 main.go:143] libmachine: Parsing certificate...
	I1212 01:22:52.331407  268396 cli_runner.go:164] Run: docker network inspect no-preload-361053 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 01:22:52.357033  268396 cli_runner.go:211] docker network inspect no-preload-361053 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 01:22:52.357127  268396 network_create.go:284] running [docker network inspect no-preload-361053] to gather additional debugging logs...
	I1212 01:22:52.357144  268396 cli_runner.go:164] Run: docker network inspect no-preload-361053
	W1212 01:22:52.374684  268396 cli_runner.go:211] docker network inspect no-preload-361053 returned with exit code 1
	I1212 01:22:52.374711  268396 network_create.go:287] error running [docker network inspect no-preload-361053]: docker network inspect no-preload-361053: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-361053 not found
	I1212 01:22:52.374724  268396 network_create.go:289] output of [docker network inspect no-preload-361053]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-361053 not found
	
	** /stderr **
	I1212 01:22:52.374815  268396 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:22:52.405595  268396 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4cd687b06342 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a2:e8:c8:87:d3:0a} reservation:<nil>}
	I1212 01:22:52.405900  268396 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c02c16721c9d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:e7:06:63:2c:e9} reservation:<nil>}
	I1212 01:22:52.406200  268396 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-805b07ff58c0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:18:35:7a:03:02} reservation:<nil>}
	I1212 01:22:52.406448  268396 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7515af99bb2e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:9e:ad:2b:8a:78:5c} reservation:<nil>}
	I1212 01:22:52.406829  268396 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bb4ba0}
	I1212 01:22:52.406853  268396 network_create.go:124] attempt to create docker network no-preload-361053 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1212 01:22:52.406917  268396 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-361053 no-preload-361053
	I1212 01:22:52.499966  268396 network_create.go:108] docker network no-preload-361053 192.168.85.0/24 created
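	# The network.go lines above show minikube's free-subnet search: it probes
	# 192.168.49.0/24 and steps upward by 9 (58, 67, 76, ...) until docker has
	# no bridge on the range, then creates the network there (192.168.85.0/24).
	# A hand check of the result, assuming the profile network still exists;
	# the --format expression mirrors the one minikube itself uses above:
	docker network inspect no-preload-361053 --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
	# expected output: 192.168.85.0/24 gw 192.168.85.1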
	I1212 01:22:52.500044  268396 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-361053" container
	I1212 01:22:52.500149  268396 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 01:22:52.526713  268396 cli_runner.go:164] Run: docker volume create no-preload-361053 --label name.minikube.sigs.k8s.io=no-preload-361053 --label created_by.minikube.sigs.k8s.io=true
	I1212 01:22:52.555181  268396 oci.go:103] Successfully created a docker volume no-preload-361053
	I1212 01:22:52.555275  268396 cli_runner.go:164] Run: docker run --rm --name no-preload-361053-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-361053 --entrypoint /usr/bin/test -v no-preload-361053:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 01:22:52.642485  268396 cache.go:162] opening:  /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1212 01:22:52.661436  268396 cache.go:162] opening:  /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1212 01:22:52.668826  268396 cache.go:162] opening:  /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1212 01:22:52.717527  268396 cache.go:162] opening:  /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1212 01:22:52.820840  268396 cache.go:162] opening:  /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1212 01:22:53.361116  268396 cache.go:157] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1212 01:22:53.361142  268396 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 1.059619327s
	I1212 01:22:53.361155  268396 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1212 01:22:53.416237  268396 oci.go:107] Successfully prepared a docker volume no-preload-361053
	I1212 01:22:53.416296  268396 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1212 01:22:53.416582  268396 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 01:22:53.416699  268396 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 01:22:53.536526  268396 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-361053 --name no-preload-361053 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-361053 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-361053 --network no-preload-361053 --ip 192.168.85.2 --volume no-preload-361053:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 01:22:53.931398  268396 cache.go:157] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1212 01:22:53.936954  268396 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.635123052s
	I1212 01:22:53.936983  268396 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1212 01:22:53.963068  268396 cache.go:157] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1212 01:22:53.963198  268396 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.661827602s
	I1212 01:22:53.963219  268396 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1212 01:22:53.986384  268396 cache.go:157] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1212 01:22:53.986413  268396 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.685454487s
	I1212 01:22:53.986425  268396 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1212 01:22:54.007371  268396 cache.go:157] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1212 01:22:54.007401  268396 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.706204946s
	I1212 01:22:54.007422  268396 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1212 01:22:54.007436  268396 cache.go:87] Successfully saved all images to host disk.
	I1212 01:22:54.186619  268396 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Running}}
	I1212 01:22:54.220201  268396 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:22:54.247718  268396 cli_runner.go:164] Run: docker exec no-preload-361053 stat /var/lib/dpkg/alternatives/iptables
	I1212 01:22:54.320543  268396 oci.go:144] the created container "no-preload-361053" has a running status.
	I1212 01:22:54.320570  268396 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa...
	I1212 01:22:55.260354  268396 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 01:22:55.288132  268396 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:22:55.313764  268396 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 01:22:55.313783  268396 kic_runner.go:114] Args: [docker exec --privileged no-preload-361053 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 01:22:55.397539  268396 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:22:55.430106  268396 machine.go:94] provisionDockerMachine start ...
	I1212 01:22:55.430195  268396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:22:55.453369  268396 main.go:143] libmachine: Using SSH client type: native
	I1212 01:22:55.453697  268396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1212 01:22:55.453706  268396 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:22:55.454333  268396 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 01:22:58.606496  268396 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-361053
	
	I1212 01:22:58.606523  268396 ubuntu.go:182] provisioning hostname "no-preload-361053"
	I1212 01:22:58.606640  268396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:22:58.624599  268396 main.go:143] libmachine: Using SSH client type: native
	I1212 01:22:58.624905  268396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1212 01:22:58.624921  268396 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-361053 && echo "no-preload-361053" | sudo tee /etc/hostname
	I1212 01:22:58.788938  268396 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-361053
	
	I1212 01:22:58.789081  268396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:22:58.806769  268396 main.go:143] libmachine: Using SSH client type: native
	I1212 01:22:58.807259  268396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1212 01:22:58.807284  268396 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-361053' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-361053/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-361053' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:22:58.963638  268396 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:22:58.963672  268396 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 01:22:58.963709  268396 ubuntu.go:190] setting up certificates
	I1212 01:22:58.963726  268396 provision.go:84] configureAuth start
	I1212 01:22:58.963801  268396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-361053
	I1212 01:22:58.981749  268396 provision.go:143] copyHostCerts
	I1212 01:22:58.981820  268396 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 01:22:58.981839  268396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 01:22:58.981915  268396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 01:22:58.982010  268396 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 01:22:58.982020  268396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 01:22:58.982047  268396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 01:22:58.982115  268396 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 01:22:58.982124  268396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 01:22:58.982151  268396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 01:22:58.982201  268396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.no-preload-361053 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-361053]
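	(For reference: the SAN list baked into the server.pem generated above can be confirmed from the host. A minimal sketch, assuming openssl is available and the path matches the log:
	  # Print the Subject Alternative Names of the generated server cert
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'
	)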
	I1212 01:22:59.411857  268396 provision.go:177] copyRemoteCerts
	I1212 01:22:59.411926  268396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:22:59.411975  268396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:22:59.429776  268396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:22:59.534811  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:22:59.552664  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:22:59.571161  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 01:22:59.589049  268396 provision.go:87] duration metric: took 625.300072ms to configureAuth
	I1212 01:22:59.589118  268396 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:22:59.589320  268396 config.go:182] Loaded profile config "no-preload-361053": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:22:59.589336  268396 machine.go:97] duration metric: took 4.15921266s to provisionDockerMachine
	I1212 01:22:59.589344  268396 client.go:176] duration metric: took 7.258856783s to LocalClient.Create
	I1212 01:22:59.589373  268396 start.go:167] duration metric: took 7.258971738s to libmachine.API.Create "no-preload-361053"
	I1212 01:22:59.589385  268396 start.go:293] postStartSetup for "no-preload-361053" (driver="docker")
	I1212 01:22:59.589397  268396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:22:59.589471  268396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:22:59.589516  268396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:22:59.606935  268396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:22:59.711320  268396 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:22:59.714678  268396 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:22:59.714704  268396 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:22:59.714716  268396 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 01:22:59.714779  268396 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 01:22:59.714866  268396 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 01:22:59.714973  268396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:22:59.722945  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:22:59.748596  268396 start.go:296] duration metric: took 159.196403ms for postStartSetup
	I1212 01:22:59.748995  268396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-361053
	I1212 01:22:59.768473  268396 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/config.json ...
	I1212 01:22:59.768764  268396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:22:59.768815  268396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:22:59.787144  268396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:22:59.888339  268396 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:22:59.893531  268396 start.go:128] duration metric: took 7.568753073s to createHost
	I1212 01:22:59.893555  268396 start.go:83] releasing machines lock for "no-preload-361053", held for 7.56888142s
	I1212 01:22:59.893624  268396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-361053
	I1212 01:22:59.910344  268396 ssh_runner.go:195] Run: cat /version.json
	I1212 01:22:59.910402  268396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:22:59.910635  268396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:22:59.910703  268396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:22:59.928072  268396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:22:59.947152  268396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:23:00.073172  268396 ssh_runner.go:195] Run: systemctl --version
	I1212 01:23:00.296140  268396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:23:00.323187  268396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:23:00.323328  268396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:23:00.372604  268396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1212 01:23:00.372631  268396 start.go:496] detecting cgroup driver to use...
	I1212 01:23:00.372673  268396 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:23:00.372737  268396 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 01:23:00.393816  268396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 01:23:00.414662  268396 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:23:00.414941  268396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:23:00.441234  268396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:23:00.462780  268396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:23:00.589289  268396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:23:00.720106  268396 docker.go:234] disabling docker service ...
	I1212 01:23:00.720218  268396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:23:00.745255  268396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:23:00.759375  268396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:23:00.868954  268396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:23:00.986823  268396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:23:01.000303  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:23:01.020766  268396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 01:23:01.035916  268396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 01:23:01.047757  268396 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 01:23:01.047841  268396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 01:23:01.057581  268396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:23:01.066936  268396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 01:23:01.076194  268396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:23:01.087446  268396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:23:01.095840  268396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 01:23:01.105081  268396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 01:23:01.114811  268396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 01:23:01.124779  268396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:23:01.133902  268396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:23:01.143787  268396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:23:01.268708  268396 ssh_runner.go:195] Run: sudo systemctl restart containerd
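	(The sed edits above leave containerd using cgroupfs as its cgroup driver and pause:3.10.1 as the sandbox image. One way to spot-check the result after the restart, a sketch against the paths shown in the log:
	  # Inspect the fields the sed commands rewrote
	  sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	  # Confirm containerd came back after the restart
	  sudo systemctl is-active containerd
	)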
	I1212 01:23:01.371121  268396 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 01:23:01.371244  268396 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 01:23:01.376096  268396 start.go:564] Will wait 60s for crictl version
	I1212 01:23:01.376229  268396 ssh_runner.go:195] Run: which crictl
	I1212 01:23:01.381423  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:23:01.408454  268396 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 01:23:01.408590  268396 ssh_runner.go:195] Run: containerd --version
	I1212 01:23:01.430441  268396 ssh_runner.go:195] Run: containerd --version
	I1212 01:23:01.460991  268396 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 01:23:01.463982  268396 cli_runner.go:164] Run: docker network inspect no-preload-361053 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:23:01.481955  268396 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1212 01:23:01.485970  268396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:23:01.496248  268396 kubeadm.go:884] updating cluster {Name:no-preload-361053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:23:01.496361  268396 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:23:01.496418  268396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:23:01.524907  268396 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1212 01:23:01.524933  268396 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 01:23:01.524972  268396 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:23:01.525170  268396 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 01:23:01.525272  268396 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 01:23:01.525364  268396 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 01:23:01.525459  268396 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 01:23:01.525562  268396 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1212 01:23:01.525649  268396 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1212 01:23:01.525752  268396 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 01:23:01.527213  268396 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 01:23:01.528229  268396 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 01:23:01.528504  268396 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 01:23:01.528739  268396 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1212 01:23:01.528968  268396 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 01:23:01.529160  268396 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1212 01:23:01.529384  268396 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 01:23:01.529560  268396 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:23:01.765531  268396 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1212 01:23:01.765607  268396 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1212 01:23:01.802870  268396 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1212 01:23:01.802949  268396 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 01:23:01.808583  268396 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1212 01:23:01.808700  268396 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1212 01:23:01.817579  268396 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1212 01:23:01.817685  268396 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1212 01:23:01.817734  268396 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1212 01:23:01.817822  268396 ssh_runner.go:195] Run: which crictl
	I1212 01:23:01.817901  268396 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 01:23:01.819579  268396 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1212 01:23:01.819698  268396 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 01:23:01.830145  268396 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1212 01:23:01.830265  268396 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 01:23:01.848326  268396 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1212 01:23:01.848422  268396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 01:23:01.848518  268396 ssh_runner.go:195] Run: which crictl
	I1212 01:23:01.848650  268396 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1212 01:23:01.848689  268396 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1212 01:23:01.848754  268396 ssh_runner.go:195] Run: which crictl
	I1212 01:23:01.876030  268396 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1212 01:23:01.876069  268396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 01:23:01.876121  268396 ssh_runner.go:195] Run: which crictl
	I1212 01:23:01.876191  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1212 01:23:01.885481  268396 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1212 01:23:01.885556  268396 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1212 01:23:01.891402  268396 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1212 01:23:01.891445  268396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 01:23:01.891501  268396 ssh_runner.go:195] Run: which crictl
	I1212 01:23:01.905165  268396 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1212 01:23:01.905263  268396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 01:23:01.905346  268396 ssh_runner.go:195] Run: which crictl
	I1212 01:23:01.905496  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1212 01:23:01.905605  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 01:23:01.936123  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1212 01:23:01.936217  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 01:23:01.940820  268396 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1212 01:23:01.940869  268396 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1212 01:23:01.940937  268396 ssh_runner.go:195] Run: which crictl
	I1212 01:23:01.941034  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 01:23:01.992640  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 01:23:01.995620  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1212 01:23:01.999055  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 01:23:02.037154  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1212 01:23:02.037339  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 01:23:02.089045  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 01:23:02.089202  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1212 01:23:02.135956  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1212 01:23:02.135956  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 01:23:02.136076  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1212 01:23:02.154278  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1212 01:23:02.154352  268396 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1212 01:23:02.154441  268396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1212 01:23:02.204958  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1212 01:23:02.204971  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1212 01:23:02.254137  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1212 01:23:02.254220  268396 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1212 01:23:02.254293  268396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1212 01:23:02.254353  268396 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1212 01:23:02.254407  268396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1212 01:23:02.262100  268396 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1212 01:23:02.262142  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1212 01:23:02.262211  268396 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1212 01:23:02.262288  268396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1212 01:23:02.312813  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1212 01:23:02.312918  268396 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1212 01:23:02.313010  268396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1212 01:23:02.316608  268396 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1212 01:23:02.316716  268396 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1212 01:23:02.336437  268396 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1212 01:23:02.336497  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1212 01:23:02.336604  268396 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1212 01:23:02.336713  268396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1212 01:23:02.336792  268396 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1212 01:23:02.336808  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1212 01:23:02.336860  268396 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1212 01:23:02.336871  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1212 01:23:02.437384  268396 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1212 01:23:02.437429  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1212 01:23:02.437546  268396 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1212 01:23:02.437633  268396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1212 01:23:02.565645  268396 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1212 01:23:02.565692  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1212 01:23:02.565756  268396 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1212 01:23:02.572292  268396 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1212 01:23:02.572340  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	W1212 01:23:02.869361  268396 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1212 01:23:02.869680  268396 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1212 01:23:02.869856  268396 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:23:02.997314  268396 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1212 01:23:02.997357  268396 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:23:02.997411  268396 ssh_runner.go:195] Run: which crictl
	I1212 01:23:03.008392  268396 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1212 01:23:03.008536  268396 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1212 01:23:03.073851  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:23:04.382840  268396 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.308950881s)
	I1212 01:23:04.382982  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:23:04.383143  268396 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.3745889s)
	I1212 01:23:04.383175  268396 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1212 01:23:04.383235  268396 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1212 01:23:04.383300  268396 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1212 01:23:04.412617  268396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:23:05.384133  268396 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.000796038s)
	I1212 01:23:05.384200  268396 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1212 01:23:05.384209  268396 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 01:23:05.384255  268396 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1212 01:23:05.384329  268396 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1212 01:23:05.384332  268396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:23:06.477683  268396 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.09330415s)
	I1212 01:23:06.477708  268396 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1212 01:23:06.477726  268396 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1212 01:23:06.477774  268396 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1212 01:23:06.477722  268396 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.093277139s)
	I1212 01:23:06.477824  268396 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1212 01:23:06.477844  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1212 01:23:07.593845  268396 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.116048337s)
	I1212 01:23:07.593914  268396 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1212 01:23:07.593948  268396 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1212 01:23:07.594034  268396 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1212 01:23:09.135788  268396 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.541713834s)
	I1212 01:23:09.135817  268396 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1212 01:23:09.135843  268396 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1212 01:23:09.135893  268396 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1212 01:23:10.236563  268396 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.100643518s)
	I1212 01:23:10.236587  268396 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1212 01:23:10.236610  268396 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:23:10.236659  268396 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:23:10.604957  268396 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 01:23:10.604994  268396 cache_images.go:125] Successfully loaded all cached images
	I1212 01:23:10.605000  268396 cache_images.go:94] duration metric: took 9.080051583s to LoadCachedImages
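	(With all eight images imported, containerd's k8s.io namespace should now list each of them. A minimal verification sketch:
	  # List the images containerd now holds in the k8s.io namespace
	  sudo ctr -n k8s.io images ls -q | grep -E 'registry.k8s.io|storage-provisioner'
	)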
	I1212 01:23:10.605012  268396 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1212 01:23:10.605107  268396 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-361053 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:23:10.605173  268396 ssh_runner.go:195] Run: sudo crictl info
	I1212 01:23:10.634185  268396 cni.go:84] Creating CNI manager for ""
	I1212 01:23:10.634210  268396 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:23:10.634226  268396 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 01:23:10.634258  268396 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-361053 NodeName:no-preload-361053 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:23:10.634377  268396 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-361053"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
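	(The config rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new, per the scp a few lines below. When debugging a failed start it can help to compare the staged file with whatever kubeadm last ran against. A sketch, assuming an earlier kubeadm.yaml already exists on the node:
	  # Compare the freshly rendered config with the previous one, if any
	  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new || true
	)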
	I1212 01:23:10.634456  268396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 01:23:10.643988  268396 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1212 01:23:10.644056  268396 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 01:23:10.653048  268396 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1212 01:23:10.653145  268396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1212 01:23:10.653934  268396 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm
	I1212 01:23:10.653940  268396 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet
	I1212 01:23:10.658421  268396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1212 01:23:10.658503  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1212 01:23:11.759195  268396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:23:11.773211  268396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1212 01:23:11.778496  268396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1212 01:23:11.778536  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1212 01:23:12.470431  268396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1212 01:23:12.475971  268396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1212 01:23:12.476012  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
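	(The checksum=file: query in the download URLs above means each binary is validated against its published .sha256 before use. Done by hand, the equivalent check follows the standard dl.k8s.io pattern; a sketch for kubelet, with kubeadm and kubectl analogous:
	  # Fetch the binary and its checksum, then verify locally
	  curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet
	  curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	  echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
	)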
	I1212 01:23:12.879975  268396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:23:12.888164  268396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:23:12.903106  268396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 01:23:12.917759  268396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1212 01:23:12.932565  268396 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:23:12.937247  268396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:23:12.950063  268396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:23:13.095958  268396 ssh_runner.go:195] Run: sudo systemctl start kubelet
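	(After the unit file and 10-kubeadm.conf drop-in are written and the daemon reloaded, the effective kubelet unit can be inspected on the node. A quick sketch:
	  # Show the unit plus its drop-in, then check the service started
	  sudo systemctl cat kubelet
	  sudo systemctl is-active kubelet
	)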
	I1212 01:23:13.115518  268396 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053 for IP: 192.168.85.2
	I1212 01:23:13.115543  268396 certs.go:195] generating shared ca certs ...
	I1212 01:23:13.115561  268396 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:13.115696  268396 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 01:23:13.115750  268396 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 01:23:13.115762  268396 certs.go:257] generating profile certs ...
	I1212 01:23:13.115821  268396 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/client.key
	I1212 01:23:13.115837  268396 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/client.crt with IP's: []
	I1212 01:23:13.646620  268396 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/client.crt ...
	I1212 01:23:13.646653  268396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/client.crt: {Name:mk7cd8bcfe53e706529059ae3c036c9b0161d4de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:13.646847  268396 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/client.key ...
	I1212 01:23:13.646860  268396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/client.key: {Name:mkd14845bf8420e47d052f767355b4c15b02f5f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:13.646948  268396 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.key.40e68572
	I1212 01:23:13.646965  268396 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.crt.40e68572 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1212 01:23:13.742893  268396 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.crt.40e68572 ...
	I1212 01:23:13.742921  268396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.crt.40e68572: {Name:mkef89f40f871a3bac01d500758bea0ce54cc064 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:13.743100  268396 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.key.40e68572 ...
	I1212 01:23:13.743120  268396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.key.40e68572: {Name:mk9becb532d021e538859cf5c405b95f53452708 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:13.743198  268396 certs.go:382] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.crt.40e68572 -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.crt
	I1212 01:23:13.743278  268396 certs.go:386] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.key.40e68572 -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.key
	I1212 01:23:13.743346  268396 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/proxy-client.key
	I1212 01:23:13.743366  268396 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/proxy-client.crt with IP's: []
	I1212 01:23:13.909344  268396 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/proxy-client.crt ...
	I1212 01:23:13.909376  268396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/proxy-client.crt: {Name:mkd8678eecb6dbdc7b0e49943eb844dfd1104f3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:13.909560  268396 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/proxy-client.key ...
	I1212 01:23:13.909575  268396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/proxy-client.key: {Name:mkc09cbbdbddea2f3e9a5b494d257bb3b7cd8548 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
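Note on the profile certs generated above: the "minikube-user" client cert is created with an empty IP SAN list, while the apiserver cert gets [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2], i.e. the first IP of the 10.96.0.0/12 service CIDR, loopback, and the node IP. A quick way to confirm the SANs that actually landed in the generated file, a sketch using the hashed filename from this run:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.crt.40e68572 \
      | grep -A1 'Subject Alternative Name'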
	I1212 01:23:13.909783  268396 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 01:23:13.909835  268396 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 01:23:13.909850  268396 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:23:13.909877  268396 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:23:13.909907  268396 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:23:13.909935  268396 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 01:23:13.909984  268396 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:23:13.910557  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:23:13.928661  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:23:13.946873  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:23:13.967117  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:23:13.986368  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:23:14.009999  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:23:14.035092  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:23:14.055690  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:23:14.077819  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:23:14.096768  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 01:23:14.116990  268396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 01:23:14.135268  268396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
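Everything scp'd above lands under /var/lib/minikube/certs inside the node, plus the kubeconfig written from memory to /var/lib/minikube/kubeconfig. A sketch for spot-checking the copies from the host, assuming the no-preload-361053 profile container is still running:

    minikube -p no-preload-361053 ssh -- sudo ls -l /var/lib/minikube/certs /var/lib/minikube/kubeconfig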
	I1212 01:23:14.148762  268396 ssh_runner.go:195] Run: openssl version
	I1212 01:23:14.155910  268396 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:23:14.164073  268396 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:23:14.172329  268396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:23:14.177064  268396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:23:14.177191  268396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:23:14.218973  268396 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:23:14.227678  268396 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 01:23:14.238113  268396 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 01:23:14.246797  268396 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 01:23:14.255328  268396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 01:23:14.259849  268396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 01:23:14.259918  268396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 01:23:14.301757  268396 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:23:14.310112  268396 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4290.pem /etc/ssl/certs/51391683.0
	I1212 01:23:14.318447  268396 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 01:23:14.327204  268396 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 01:23:14.335604  268396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 01:23:14.340026  268396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 01:23:14.340124  268396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 01:23:14.381964  268396 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 01:23:14.390721  268396 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42902.pem /etc/ssl/certs/3ec20f2e.0
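The test/ln/hash sequences above are standard OpenSSL-style trust-store wiring: each PEM is linked into /etc/ssl/certs and additionally exposed under its subject hash as /etc/ssl/certs/<hash>.0, which is the filename OpenSSL's certificate lookup expects. A minimal sketch of the same steps for the minikubeCA file, with paths and the b5213941 hash taken from this run:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")   # prints b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"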
	I1212 01:23:14.398979  268396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:23:14.403176  268396 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 01:23:14.403236  268396 kubeadm.go:401] StartCluster: {Name:no-preload-361053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:23:14.403309  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 01:23:14.403377  268396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:23:14.433913  268396 cri.go:89] found id: ""
	I1212 01:23:14.433988  268396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:23:14.442583  268396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:23:14.451177  268396 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:23:14.451247  268396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:23:14.459273  268396 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:23:14.459302  268396 kubeadm.go:158] found existing configuration files:
	
	I1212 01:23:14.459355  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:23:14.467672  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:23:14.467787  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:23:14.475686  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:23:14.483635  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:23:14.483746  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:23:14.491492  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:23:14.500004  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:23:14.500077  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:23:14.508097  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:23:14.516735  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:23:14.516799  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
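The four grep/rm pairs above implement the stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted before kubeadm init runs (here all four grep calls exit with status 2 because the files do not exist yet, so the rm calls are no-ops). Equivalent shell, a sketch with the endpoint copied from this log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done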
	I1212 01:23:14.525466  268396 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:23:14.588126  268396 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:23:14.588482  268396 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:23:14.664151  268396 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:23:14.664227  268396 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:23:14.664268  268396 kubeadm.go:319] OS: Linux
	I1212 01:23:14.664317  268396 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:23:14.664370  268396 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:23:14.664421  268396 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:23:14.664483  268396 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:23:14.664537  268396 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:23:14.664601  268396 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:23:14.664651  268396 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:23:14.664702  268396 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:23:14.664758  268396 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:23:14.730495  268396 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:23:14.730697  268396 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:23:14.730836  268396 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:23:14.736459  268396 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:23:14.743674  268396 out.go:252]   - Generating certificates and keys ...
	I1212 01:23:14.743784  268396 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:23:14.743854  268396 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:23:15.091184  268396 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 01:23:15.334893  268396 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 01:23:15.957917  268396 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 01:23:16.192131  268396 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 01:23:16.480659  268396 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 01:23:16.481058  268396 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-361053] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 01:23:16.557493  268396 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 01:23:16.557860  268396 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-361053] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1212 01:23:16.666066  268396 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 01:23:16.877818  268396 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 01:23:16.950526  268396 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 01:23:16.950845  268396 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:23:17.397231  268396 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:23:17.664539  268396 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:23:18.536264  268396 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:23:18.667728  268396 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:23:19.038194  268396 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:23:19.039282  268396 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:23:19.042336  268396 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:23:19.045973  268396 out.go:252]   - Booting up control plane ...
	I1212 01:23:19.046073  268396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:23:19.046150  268396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:23:19.049863  268396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:23:19.083974  268396 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:23:19.084133  268396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:23:19.091720  268396 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:23:19.092073  268396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:23:19.092119  268396 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:23:19.227461  268396 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:23:19.227589  268396 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:27:19.226422  268396 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000962088s
	I1212 01:27:19.226635  268396 kubeadm.go:319] 
	I1212 01:27:19.226702  268396 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:27:19.226735  268396 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:27:19.226840  268396 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:27:19.226847  268396 kubeadm.go:319] 
	I1212 01:27:19.227012  268396 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:27:19.227062  268396 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:27:19.227095  268396 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:27:19.227099  268396 kubeadm.go:319] 
	I1212 01:27:19.231490  268396 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:27:19.231948  268396 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:27:19.232070  268396 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:27:19.232304  268396 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 01:27:19.232318  268396 kubeadm.go:319] 
	W1212 01:27:19.232506  268396 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-361053] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-361053] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000962088s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
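The failing step is kubeadm's kubelet-check, which polls the kubelet's local healthz endpoint for up to 4m0s. The probe and the two follow-ups the error text recommends can be reproduced by hand on the node, with the commands exactly as they appear above:

    curl -sSL http://127.0.0.1:10248/healthz
    systemctl status kubelet
    journalctl -xeu kubelet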
	
	I1212 01:27:19.232600  268396 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 01:27:19.232891  268396 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:27:19.641819  268396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:27:19.655717  268396 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:27:19.655786  268396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:27:19.664059  268396 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:27:19.664079  268396 kubeadm.go:158] found existing configuration files:
	
	I1212 01:27:19.664128  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:27:19.672510  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:27:19.672575  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:27:19.680342  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:27:19.688315  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:27:19.688383  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:27:19.696209  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:27:19.704155  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:27:19.704219  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:27:19.711899  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:27:19.719844  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:27:19.719910  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:27:19.727687  268396 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:27:19.860959  268396 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:27:19.861382  268396 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:27:19.927748  268396 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:31:22.255083  268396 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 01:31:22.255118  268396 kubeadm.go:319] 
	I1212 01:31:22.255185  268396 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:31:22.259224  268396 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:31:22.259291  268396 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:31:22.259384  268396 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:31:22.259445  268396 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:31:22.259485  268396 kubeadm.go:319] OS: Linux
	I1212 01:31:22.259534  268396 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:31:22.259586  268396 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:31:22.259638  268396 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:31:22.259689  268396 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:31:22.259742  268396 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:31:22.259793  268396 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:31:22.259842  268396 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:31:22.259894  268396 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:31:22.259943  268396 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:31:22.260016  268396 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:31:22.260113  268396 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:31:22.260208  268396 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:31:22.260274  268396 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:31:22.264965  268396 out.go:252]   - Generating certificates and keys ...
	I1212 01:31:22.265061  268396 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:31:22.265129  268396 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:31:22.265205  268396 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:31:22.265267  268396 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:31:22.265335  268396 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:31:22.265389  268396 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 01:31:22.265452  268396 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:31:22.265511  268396 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:31:22.265581  268396 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:31:22.265657  268396 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:31:22.265698  268396 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 01:31:22.265754  268396 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:31:22.265805  268396 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:31:22.265863  268396 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:31:22.265922  268396 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:31:22.265985  268396 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:31:22.266040  268396 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:31:22.266122  268396 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:31:22.266188  268396 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:31:22.269011  268396 out.go:252]   - Booting up control plane ...
	I1212 01:31:22.269113  268396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:31:22.269196  268396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:31:22.269313  268396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:31:22.269458  268396 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:31:22.269587  268396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:31:22.269697  268396 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:31:22.269820  268396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:31:22.269866  268396 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:31:22.270050  268396 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:31:22.270170  268396 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:31:22.270256  268396 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001218388s
	I1212 01:31:22.270267  268396 kubeadm.go:319] 
	I1212 01:31:22.270326  268396 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:31:22.270369  268396 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:31:22.270483  268396 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:31:22.270503  268396 kubeadm.go:319] 
	I1212 01:31:22.270616  268396 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:31:22.270657  268396 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:31:22.270717  268396 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:31:22.270757  268396 kubeadm.go:319] 
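The second attempt fails identically, and the repeated SystemVerification warning is the most concrete lead in this log: the host is on cgroups v1 (kernel 5.15.0-1084-aws, every CGROUPS_* controller reported on v1), and the warning states that kubelet v1.35 or newer requires the kubelet configuration option 'FailCgroupV1' set to 'false' to keep running on cgroup v1. A sketch of that opt-in, assuming the camelCase YAML key and the config path written by kubeadm above; verify against 'journalctl -xeu kubelet' before applying:

    # opt-in named by the warning; key spelling is an assumption (FailCgroupV1 -> failCgroupV1)
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet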
	I1212 01:31:22.270858  268396 kubeadm.go:403] duration metric: took 8m7.867624823s to StartCluster
	I1212 01:31:22.270898  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:31:22.270968  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:31:22.306968  268396 cri.go:89] found id: ""
	I1212 01:31:22.307036  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.307047  268396 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:31:22.307054  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:31:22.307137  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:31:22.339653  268396 cri.go:89] found id: ""
	I1212 01:31:22.339689  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.339700  268396 logs.go:284] No container was found matching "etcd"
	I1212 01:31:22.339706  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:31:22.339765  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:31:22.368586  268396 cri.go:89] found id: ""
	I1212 01:31:22.368607  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.368615  268396 logs.go:284] No container was found matching "coredns"
	I1212 01:31:22.368621  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:31:22.368680  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:31:22.393839  268396 cri.go:89] found id: ""
	I1212 01:31:22.393912  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.393934  268396 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:31:22.393960  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:31:22.394035  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:31:22.419583  268396 cri.go:89] found id: ""
	I1212 01:31:22.419608  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.419616  268396 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:31:22.419622  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:31:22.419680  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:31:22.448415  268396 cri.go:89] found id: ""
	I1212 01:31:22.448443  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.448451  268396 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:31:22.448459  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:31:22.448517  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:31:22.476913  268396 cri.go:89] found id: ""
	I1212 01:31:22.476939  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.476947  268396 logs.go:284] No container was found matching "kindnet"
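The seven listings above are the post-mortem container sweep; every one returns an empty ID list, i.e. no control-plane container was ever created, consistent with the kubelet never becoming healthy. The same sweep as a loop on the node, a sketch with the component names from this log:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"
    done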
	I1212 01:31:22.476956  268396 logs.go:123] Gathering logs for kubelet ...
	I1212 01:31:22.476983  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:31:22.533409  268396 logs.go:123] Gathering logs for dmesg ...
	I1212 01:31:22.533444  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:31:22.548368  268396 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:31:22.548401  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:31:22.614148  268396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:31:22.605942    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.606490    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608232    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608633    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.610124    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:31:22.605942    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.606490    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608232    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608633    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.610124    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:31:22.614173  268396 logs.go:123] Gathering logs for containerd ...
	I1212 01:31:22.614185  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:31:22.656511  268396 logs.go:123] Gathering logs for container status ...
	I1212 01:31:22.656543  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
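The gather step pulls four usable sources, the kubelet and containerd journals, kernel warnings, and the CRI container table; "describe nodes" is attempted as well but fails below because nothing is listening on port 8443. To collect the same bundle by hand, a sketch with the flags copied from the log:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a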
	W1212 01:31:22.687238  268396 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001218388s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:31:22.687348  268396 out.go:285] * 
	W1212 01:31:22.687426  268396 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001218388s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:31:22.687441  268396 out.go:285] * 
	W1212 01:31:22.689841  268396 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:31:22.695133  268396 out.go:203] 
	W1212 01:31:22.698069  268396 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001218388s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:31:22.698114  268396 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 01:31:22.698136  268396 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 01:31:22.701165  268396 out.go:203] 

                                                
                                                
** /stderr **
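The kubeadm failure above bottoms out at the kubelet health endpoint: the kubelet-check phase polls http://127.0.0.1:10248/healthz for up to 4m0s and never gets an answer. A minimal sketch of the same checks run by hand against the failing profile (assuming SSH access through `minikube ssh`; the profile name is taken from this run):

	# Sketch only: mirrors the kubelet-check probe and the troubleshooting
	# commands suggested in the kubeadm output above.
	minikube -p no-preload-361053 ssh -- sudo systemctl status kubelet --no-pager
	minikube -p no-preload-361053 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
	minikube -p no-preload-361053 ssh -- curl -sS http://127.0.0.1:10248/healthz   # a healthy kubelet answers "ok"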
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
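Both SystemVerification warnings point at the host's cgroup setup: kubelet v1.35 deprecates cgroup v1 and only tolerates it when the KubeletConfiguration field `failCgroupV1` is explicitly set to false. A quick hedged check of which cgroup version the node actually runs, before reaching for that option:

	# Prints the filesystem type mounted at /sys/fs/cgroup:
	#   cgroup2fs -> cgroups v2 (unified hierarchy)
	#   tmpfs     -> cgroups v1 (the deprecated mode warned about above)
	stat -fc %T /sys/fs/cgroup/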
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-361053
helpers_test.go:244: (dbg) docker inspect no-preload-361053:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd",
	        "Created": "2025-12-12T01:22:53.604240637Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 268910,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T01:22:53.788312247Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/hostname",
	        "HostsPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/hosts",
	        "LogPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd-json.log",
	        "Name": "/no-preload-361053",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-361053:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-361053",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd",
	                "LowerDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-361053",
	                "Source": "/var/lib/docker/volumes/no-preload-361053/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-361053",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-361053",
	                "name.minikube.sigs.k8s.io": "no-preload-361053",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6d73be6e1f66a3f7c6d96dca30aa8c1389affdac21224c7034e0e227db3e8397",
	            "SandboxKey": "/var/run/docker/netns/6d73be6e1f66",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-361053": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:21:58:59:ae:af",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee086efedb5c3900c251cd31f9316499408470e70a7d486e64d8b91c6bf60cd7",
	                    "EndpointID": "ae778ff101bac87a43f1ea9fade85a6810900e2d9b74a07254c68fbc89db3f07",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-361053",
	                        "68256fe8de3b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
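Most of the inspect payload above is noise for debugging; the useful bits are the published ports and the container's network address. A hedged sketch of pulling single fields with docker's Go-template syntax (profile name from this run; 8443/tcp is the API server port):

	# Host port mapped to the API server (8443/tcp inside the container);
	# the JSON above shows 33086 for this run.
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' no-preload-361053
	# Container IP on the profile network (192.168.85.2 above); the network
	# name contains hyphens, so it needs the index function:
	docker inspect -f '{{ (index .NetworkSettings.Networks "no-preload-361053").IPAddress }}' no-preload-361053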
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-361053 -n no-preload-361053
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-361053 -n no-preload-361053: exit status 6 (361.620151ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 01:31:23.160990  284133 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-361053" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
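The status error itself (status.go:458) is narrower than the start failure: the profile never made it into the kubeconfig, so kubectl has no matching context. A sketch of confirming and repairing that, using only commands the warning above already names:

	# Is there a context for the profile? (expect no output here, per the error)
	kubectl config get-contexts -o name | grep no-preload-361053
	# Rewrite the kubeconfig entry from minikube's own profile state:
	minikube -p no-preload-361053 update-context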
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-361053 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-971096 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:21 UTC │ 12 Dec 25 01:21 UTC │
	│ start   │ -p default-k8s-diff-port-971096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:21 UTC │ 12 Dec 25 01:22 UTC │
	│ image   │ old-k8s-version-147581 image list --format=json                                                                                                                                                                                                            │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ pause   │ -p old-k8s-version-147581 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ unpause │ -p old-k8s-version-147581 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p old-k8s-version-147581                                                                                                                                                                                                                                  │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p old-k8s-version-147581                                                                                                                                                                                                                                  │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:23 UTC │
	│ image   │ default-k8s-diff-port-971096 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ pause   │ -p default-k8s-diff-port-971096 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ unpause │ -p default-k8s-diff-port-971096 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p disable-driver-mounts-539158                                                                                                                                                                                                                            │ disable-driver-mounts-539158 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-648696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ stop    │ -p embed-certs-648696 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ addons  │ enable dashboard -p embed-certs-648696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:24 UTC │
	│ image   │ embed-certs-648696 image list --format=json                                                                                                                                                                                                                │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ pause   │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ unpause │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ start   │ -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 01:25:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:25:10.610326  276743 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:25:10.611013  276743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:25:10.611028  276743 out.go:374] Setting ErrFile to fd 2...
	I1212 01:25:10.611033  276743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:25:10.611296  276743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:25:10.611727  276743 out.go:368] Setting JSON to false
	I1212 01:25:10.612585  276743 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7657,"bootTime":1765495054,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 01:25:10.612655  276743 start.go:143] virtualization:  
	I1212 01:25:10.616537  276743 out.go:179] * [newest-cni-256959] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:25:10.620721  276743 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:25:10.620800  276743 notify.go:221] Checking for updates...
	I1212 01:25:10.627029  276743 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:25:10.630074  276743 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:25:10.633037  276743 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 01:25:10.635913  276743 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:25:10.638863  276743 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:25:10.642342  276743 config.go:182] Loaded profile config "no-preload-361053": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:25:10.642439  276743 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:25:10.663336  276743 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:25:10.663491  276743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:25:10.731650  276743 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:25:10.720876374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:25:10.731750  276743 docker.go:319] overlay module found
	I1212 01:25:10.734985  276743 out.go:179] * Using the docker driver based on user configuration
	I1212 01:25:10.738034  276743 start.go:309] selected driver: docker
	I1212 01:25:10.738050  276743 start.go:927] validating driver "docker" against <nil>
	I1212 01:25:10.738062  276743 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:25:10.738778  276743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:25:10.802614  276743 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:25:10.791452052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:25:10.802835  276743 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1212 01:25:10.802872  276743 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1212 01:25:10.803130  276743 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 01:25:10.806190  276743 out.go:179] * Using Docker driver with root privileges
	I1212 01:25:10.809707  276743 cni.go:84] Creating CNI manager for ""
	I1212 01:25:10.809779  276743 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:25:10.809793  276743 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 01:25:10.809867  276743 start.go:353] cluster config:
	{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:25:10.813089  276743 out.go:179] * Starting "newest-cni-256959" primary control-plane node in "newest-cni-256959" cluster
	I1212 01:25:10.815885  276743 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 01:25:10.818744  276743 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:25:10.821553  276743 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:25:10.821612  276743 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 01:25:10.821626  276743 cache.go:65] Caching tarball of preloaded images
	I1212 01:25:10.821716  276743 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 01:25:10.821731  276743 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 01:25:10.821841  276743 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:25:10.821864  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json: {Name:mk4998d8ef384508a1b134495f81d7fc826b1990 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:10.822019  276743 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:25:10.842165  276743 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:25:10.842191  276743 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:25:10.842204  276743 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:25:10.842235  276743 start.go:360] acquireMachinesLock for newest-cni-256959: {Name:mke4c35c218ad59b1da2c46074b57e71134fc7be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:25:10.842335  276743 start.go:364] duration metric: took 80.822µs to acquireMachinesLock for "newest-cni-256959"
	I1212 01:25:10.842366  276743 start.go:93] Provisioning new machine with config: &{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 01:25:10.842439  276743 start.go:125] createHost starting for "" (driver="docker")
	I1212 01:25:10.846610  276743 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 01:25:10.846848  276743 start.go:159] libmachine.API.Create for "newest-cni-256959" (driver="docker")
	I1212 01:25:10.846884  276743 client.go:173] LocalClient.Create starting
	I1212 01:25:10.846956  276743 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem
	I1212 01:25:10.847041  276743 main.go:143] libmachine: Decoding PEM data...
	I1212 01:25:10.847062  276743 main.go:143] libmachine: Parsing certificate...
	I1212 01:25:10.847105  276743 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem
	I1212 01:25:10.847126  276743 main.go:143] libmachine: Decoding PEM data...
	I1212 01:25:10.847142  276743 main.go:143] libmachine: Parsing certificate...
	I1212 01:25:10.847512  276743 cli_runner.go:164] Run: docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 01:25:10.866623  276743 cli_runner.go:211] docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 01:25:10.866711  276743 network_create.go:284] running [docker network inspect newest-cni-256959] to gather additional debugging logs...
	I1212 01:25:10.866732  276743 cli_runner.go:164] Run: docker network inspect newest-cni-256959
	W1212 01:25:10.882830  276743 cli_runner.go:211] docker network inspect newest-cni-256959 returned with exit code 1
	I1212 01:25:10.882862  276743 network_create.go:287] error running [docker network inspect newest-cni-256959]: docker network inspect newest-cni-256959: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-256959 not found
	I1212 01:25:10.882876  276743 network_create.go:289] output of [docker network inspect newest-cni-256959]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-256959 not found
	
	** /stderr **
	I1212 01:25:10.883058  276743 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:25:10.899622  276743 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4cd687b06342 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a2:e8:c8:87:d3:0a} reservation:<nil>}
	I1212 01:25:10.899939  276743 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c02c16721c9d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:e7:06:63:2c:e9} reservation:<nil>}
	I1212 01:25:10.900288  276743 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-805b07ff58c0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:18:35:7a:03:02} reservation:<nil>}
	I1212 01:25:10.900688  276743 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a309a0}
	I1212 01:25:10.900712  276743 network_create.go:124] attempt to create docker network newest-cni-256959 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1212 01:25:10.900767  276743 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-256959 newest-cni-256959
	I1212 01:25:10.956774  276743 network_create.go:108] docker network newest-cni-256959 192.168.76.0/24 created
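
Note on the subnet probing above: minikube walks its candidate private /24s (192.168.49.0, .58.0, .67.0, ...) and takes the first one no existing bridge occupies, here 192.168.76.0/24. A minimal sketch, using only the labels visible in the "docker network create" call above, to list minikube-created networks and confirm the assigned subnet by hand:

	# list networks carrying minikube's creation label
	docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
	# print the subnet and gateway actually assigned to this profile's network
	docker network inspect newest-cni-256959 --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
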
	I1212 01:25:10.956809  276743 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-256959" container
	I1212 01:25:10.956884  276743 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 01:25:10.973399  276743 cli_runner.go:164] Run: docker volume create newest-cni-256959 --label name.minikube.sigs.k8s.io=newest-cni-256959 --label created_by.minikube.sigs.k8s.io=true
	I1212 01:25:10.995879  276743 oci.go:103] Successfully created a docker volume newest-cni-256959
	I1212 01:25:10.995970  276743 cli_runner.go:164] Run: docker run --rm --name newest-cni-256959-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-256959 --entrypoint /usr/bin/test -v newest-cni-256959:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 01:25:11.527236  276743 oci.go:107] Successfully prepared a docker volume newest-cni-256959
	I1212 01:25:11.527311  276743 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:25:11.527325  276743 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 01:25:11.527417  276743 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-256959:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 01:25:15.366008  276743 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-256959:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.838540611s)
	I1212 01:25:15.366040  276743 kic.go:203] duration metric: took 3.838711624s to extract preloaded images to volume ...
	W1212 01:25:15.366202  276743 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 01:25:15.366316  276743 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 01:25:15.418308  276743 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-256959 --name newest-cni-256959 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-256959 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-256959 --network newest-cni-256959 --ip 192.168.76.2 --volume newest-cni-256959:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
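
Every service port of the node container is published to an ephemeral loopback port (--publish=127.0.0.1::22 and friends in the docker run above). A minimal sketch with the standard Docker CLI to resolve where container port 22 landed, matching the SSH port the log settles on later (127.0.0.1:33093 in this run):

	# prints the host address:port mapped to container port 22
	docker port newest-cni-256959 22
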
	I1212 01:25:15.701138  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Running}}
	I1212 01:25:15.722831  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:25:15.744776  276743 cli_runner.go:164] Run: docker exec newest-cni-256959 stat /var/lib/dpkg/alternatives/iptables
	I1212 01:25:15.800337  276743 oci.go:144] the created container "newest-cni-256959" has a running status.
	I1212 01:25:15.800363  276743 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa...
	I1212 01:25:16.255229  276743 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 01:25:16.278394  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:25:16.296982  276743 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 01:25:16.297007  276743 kic_runner.go:114] Args: [docker exec --privileged newest-cni-256959 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 01:25:16.336579  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:25:16.356755  276743 machine.go:94] provisionDockerMachine start ...
	I1212 01:25:16.356843  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:16.374168  276743 main.go:143] libmachine: Using SSH client type: native
	I1212 01:25:16.374501  276743 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 01:25:16.374511  276743 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:25:16.375249  276743 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42232->127.0.0.1:33093: read: connection reset by peer
	I1212 01:25:19.530494  276743 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:25:19.530520  276743 ubuntu.go:182] provisioning hostname "newest-cni-256959"
	I1212 01:25:19.530584  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:19.548139  276743 main.go:143] libmachine: Using SSH client type: native
	I1212 01:25:19.548459  276743 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 01:25:19.548475  276743 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-256959 && echo "newest-cni-256959" | sudo tee /etc/hostname
	I1212 01:25:19.704022  276743 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:25:19.704112  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:19.721641  276743 main.go:143] libmachine: Using SSH client type: native
	I1212 01:25:19.721955  276743 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 01:25:19.721980  276743 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-256959' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-256959/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-256959' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:25:19.879218  276743 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:25:19.879312  276743 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 01:25:19.879367  276743 ubuntu.go:190] setting up certificates
	I1212 01:25:19.879397  276743 provision.go:84] configureAuth start
	I1212 01:25:19.879518  276743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:25:19.896152  276743 provision.go:143] copyHostCerts
	I1212 01:25:19.896221  276743 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 01:25:19.896234  276743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 01:25:19.896315  276743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 01:25:19.896434  276743 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 01:25:19.896445  276743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 01:25:19.896476  276743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 01:25:19.896542  276743 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 01:25:19.896551  276743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 01:25:19.896577  276743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 01:25:19.896641  276743 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.newest-cni-256959 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-256959]
	I1212 01:25:20.204760  276743 provision.go:177] copyRemoteCerts
	I1212 01:25:20.204827  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:25:20.204875  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.224116  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.330622  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:25:20.348480  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:25:20.366287  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:25:20.383425  276743 provision.go:87] duration metric: took 503.997002ms to configureAuth
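
configureAuth generated a server certificate whose SANs have to cover every name and address used to reach the machine (san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-256959] above). A minimal sketch to inspect those SANs with openssl, using the server.pem path from this run:

	# print the Subject Alternative Name extension of the generated server cert
	openssl x509 -noout -text -in /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'
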
	I1212 01:25:20.383450  276743 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:25:20.383651  276743 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:25:20.383658  276743 machine.go:97] duration metric: took 4.026884265s to provisionDockerMachine
	I1212 01:25:20.383665  276743 client.go:176] duration metric: took 9.536770098s to LocalClient.Create
	I1212 01:25:20.383678  276743 start.go:167] duration metric: took 9.536832859s to libmachine.API.Create "newest-cni-256959"
	I1212 01:25:20.383685  276743 start.go:293] postStartSetup for "newest-cni-256959" (driver="docker")
	I1212 01:25:20.383694  276743 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:25:20.383742  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:25:20.383784  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.400325  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.507208  276743 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:25:20.510550  276743 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:25:20.510580  276743 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:25:20.510595  276743 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 01:25:20.510649  276743 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 01:25:20.510733  276743 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 01:25:20.510839  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:25:20.518172  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:25:20.536052  276743 start.go:296] duration metric: took 152.353471ms for postStartSetup
	I1212 01:25:20.536438  276743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:25:20.555757  276743 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:25:20.556035  276743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:25:20.556076  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.580490  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.688189  276743 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:25:20.693057  276743 start.go:128] duration metric: took 9.850603168s to createHost
	I1212 01:25:20.693084  276743 start.go:83] releasing machines lock for "newest-cni-256959", held for 9.850734377s
	I1212 01:25:20.693172  276743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:25:20.709859  276743 ssh_runner.go:195] Run: cat /version.json
	I1212 01:25:20.709914  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.710177  276743 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:25:20.710239  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.729457  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.741797  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.834710  276743 ssh_runner.go:195] Run: systemctl --version
	I1212 01:25:20.924313  276743 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:25:20.928847  276743 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:25:20.928951  276743 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:25:20.954175  276743 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1212 01:25:20.954200  276743 start.go:496] detecting cgroup driver to use...
	I1212 01:25:20.954231  276743 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:25:20.954281  276743 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 01:25:20.969656  276743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 01:25:20.982642  276743 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:25:20.982710  276743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:25:20.999922  276743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:25:21.020615  276743 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:25:21.141838  276743 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:25:21.263080  276743 docker.go:234] disabling docker service ...
	I1212 01:25:21.263148  276743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:25:21.287246  276743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:25:21.310750  276743 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:25:21.444187  276743 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:25:21.567374  276743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:25:21.580397  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:25:21.594203  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 01:25:21.603451  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 01:25:21.612481  276743 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 01:25:21.612614  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 01:25:21.621568  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:25:21.630132  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 01:25:21.639772  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:25:21.648550  276743 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:25:21.656926  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 01:25:21.666027  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 01:25:21.675421  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 01:25:21.684275  276743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:25:21.692101  276743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:25:21.699082  276743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:25:21.804978  276743 ssh_runner.go:195] Run: sudo systemctl restart containerd
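
The sed sequence above rewrites /etc/containerd/config.toml in place before the restart: SystemdCgroup = false (cgroupfs driver), sandbox_image = registry.k8s.io/pause:3.10.1, the runc v2 shim, conf_dir = /etc/cni/net.d, and enable_unprivileged_ports = true. A minimal sketch to confirm on the node that the edits took effect (key names taken from the sed expressions themselves):

	# show the settings the sed edits are expected to have produced
	grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
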
	I1212 01:25:21.939895  276743 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 01:25:21.939976  276743 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 01:25:21.944016  276743 start.go:564] Will wait 60s for crictl version
	I1212 01:25:21.944158  276743 ssh_runner.go:195] Run: which crictl
	I1212 01:25:21.947592  276743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:25:21.970388  276743 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 01:25:21.970506  276743 ssh_runner.go:195] Run: containerd --version
	I1212 01:25:21.989928  276743 ssh_runner.go:195] Run: containerd --version
	I1212 01:25:22.016197  276743 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 01:25:22.019317  276743 cli_runner.go:164] Run: docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:25:22.042635  276743 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 01:25:22.047510  276743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:25:22.062564  276743 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 01:25:22.065397  276743 kubeadm.go:884] updating cluster {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:25:22.065551  276743 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:25:22.065640  276743 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:25:22.102156  276743 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:25:22.102183  276743 containerd.go:534] Images already preloaded, skipping extraction
	I1212 01:25:22.102250  276743 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:25:22.129883  276743 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:25:22.129908  276743 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:25:22.129916  276743 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1212 01:25:22.130003  276743 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-256959 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:25:22.130072  276743 ssh_runner.go:195] Run: sudo crictl info
	I1212 01:25:22.157382  276743 cni.go:84] Creating CNI manager for ""
	I1212 01:25:22.157407  276743 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:25:22.157422  276743 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 01:25:22.157449  276743 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-256959 NodeName:newest-cni-256959 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:25:22.157566  276743 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-256959"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
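
The YAML above is the complete kubeadm configuration minikube renders, later written to /var/tmp/minikube/kubeadm.yaml: an InitConfiguration pinning the CRI socket and node IP, a ClusterConfiguration carrying the admission plugins and CIDRs, plus KubeletConfiguration and KubeProxyConfiguration documents. A minimal sketch for sanity-checking such a file before init, assuming a kubeadm recent enough to ship the validate subcommand:

	# validate the rendered config against kubeadm's API types (recent kubeadm releases)
	sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml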
	
	I1212 01:25:22.157640  276743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 01:25:22.165592  276743 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:25:22.165664  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:25:22.173544  276743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:25:22.186913  276743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 01:25:22.199980  276743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1212 01:25:22.212497  276743 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:25:22.216212  276743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:25:22.226129  276743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:25:22.341565  276743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:25:22.362735  276743 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959 for IP: 192.168.76.2
	I1212 01:25:22.362758  276743 certs.go:195] generating shared ca certs ...
	I1212 01:25:22.362774  276743 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:22.362922  276743 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 01:25:22.362982  276743 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 01:25:22.363063  276743 certs.go:257] generating profile certs ...
	I1212 01:25:22.363128  276743 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key
	I1212 01:25:22.363145  276743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.crt with IP's: []
	I1212 01:25:23.043220  276743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.crt ...
	I1212 01:25:23.043305  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.crt: {Name:mke800b4895a7f26c3f61118ac2a9636e3a9248a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.043557  276743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key ...
	I1212 01:25:23.043596  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key: {Name:mkb2206776a08341de5b9d37086d859f3539aa54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.043743  276743 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93
	I1212 01:25:23.043783  276743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1212 01:25:23.163980  276743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93 ...
	I1212 01:25:23.164017  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93: {Name:mk05b9dd6b8930af6580fe78d40e6026f3e8847a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.164237  276743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93 ...
	I1212 01:25:23.164254  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93: {Name:mk5e2ac6bbc37c39d5b319f8600a5d25e63c4a12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.164355  276743 certs.go:382] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93 -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt
	I1212 01:25:23.164449  276743 certs.go:386] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93 -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key
	I1212 01:25:23.164518  276743 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key
	I1212 01:25:23.164541  276743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt with IP's: []
	I1212 01:25:23.503416  276743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt ...
	I1212 01:25:23.503453  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt: {Name:mka5a6a7cee07eb7c969d496d8aa380d667ba867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.503635  276743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key ...
	I1212 01:25:23.503652  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key: {Name:mkd0b1a9e86a7f90668157e83a73d06f56064ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.503848  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 01:25:23.503900  276743 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 01:25:23.503913  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:25:23.503965  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:25:23.503999  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:25:23.504032  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 01:25:23.504080  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:25:23.504696  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:25:23.524669  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:25:23.551586  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:25:23.572683  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:25:23.594234  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:25:23.612544  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:25:23.629869  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:25:23.647023  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:25:23.664698  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 01:25:23.682123  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:25:23.699502  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 01:25:23.716689  276743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:25:23.729602  276743 ssh_runner.go:195] Run: openssl version
	I1212 01:25:23.735703  276743 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.742851  276743 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 01:25:23.750077  276743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.753690  276743 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.753758  276743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.795206  276743 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:25:23.802639  276743 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4290.pem /etc/ssl/certs/51391683.0
	I1212 01:25:23.809951  276743 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.817139  276743 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 01:25:23.830841  276743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.834926  276743 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.835070  276743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.876422  276743 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 01:25:23.885502  276743 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42902.pem /etc/ssl/certs/3ec20f2e.0
	I1212 01:25:23.893037  276743 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.900652  276743 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:25:23.908635  276743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.912614  276743 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.912690  276743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.956401  276743 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:25:23.964299  276743 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
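
The test/ln pairs above maintain OpenSSL's hashed-symlink layout in /etc/ssl/certs: each CA is linked under <subject_hash>.0, where the hash is exactly what "openssl x509 -hash" printed (b5213941 for minikubeCA here, 3ec20f2e and 51391683 for the others). A minimal sketch reproducing one such link by hand:

	# the hash is derived from the certificate subject, so the link name is deterministic
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
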
	I1212 01:25:23.971681  276743 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:25:23.975558  276743 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 01:25:23.975609  276743 kubeadm.go:401] StartCluster: {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:25:23.975697  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 01:25:23.975759  276743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:25:24.003976  276743 cri.go:89] found id: ""
	I1212 01:25:24.004073  276743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:25:24.014227  276743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:25:24.022866  276743 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:25:24.022958  276743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:25:24.031328  276743 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:25:24.031359  276743 kubeadm.go:158] found existing configuration files:
	
	I1212 01:25:24.031425  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:25:24.039632  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:25:24.039710  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:25:24.047426  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:25:24.055269  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:25:24.055386  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:25:24.062906  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:25:24.070757  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:25:24.070846  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:25:24.078322  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:25:24.086235  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:25:24.086340  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
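
The grep/rm exchanges above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes survives only if it already points at control-plane.minikube.internal:8443, and here all four are absent, so the rm calls are no-ops. A condensed shell equivalent of what the ssh_runner is doing, as a sketch to run inside the node:

    # keep each kubeconfig only if it targets the expected control-plane endpoint
    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
            || sudo rm -f "/etc/kubernetes/$f.conf"
    done
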
	I1212 01:25:24.093978  276743 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:25:24.130495  276743 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:25:24.130556  276743 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:25:24.204494  276743 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:25:24.204576  276743 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:25:24.204617  276743 kubeadm.go:319] OS: Linux
	I1212 01:25:24.204667  276743 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:25:24.204719  276743 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:25:24.204770  276743 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:25:24.204821  276743 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:25:24.204871  276743 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:25:24.204928  276743 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:25:24.204978  276743 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:25:24.205039  276743 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:25:24.205089  276743 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:25:24.274059  276743 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:25:24.274248  276743 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:25:24.274393  276743 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:25:24.281432  276743 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:25:24.288265  276743 out.go:252]   - Generating certificates and keys ...
	I1212 01:25:24.288438  276743 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:25:24.288544  276743 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:25:24.872395  276743 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 01:25:24.948048  276743 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 01:25:25.302518  276743 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 01:25:25.648856  276743 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 01:25:25.789938  276743 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 01:25:25.790397  276743 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 01:25:26.099340  276743 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 01:25:26.099559  276743 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 01:25:26.538607  276743 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 01:25:27.389042  276743 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 01:25:27.842473  276743 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 01:25:27.842877  276743 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:25:27.936371  276743 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:25:28.210661  276743 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:25:28.314836  276743 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:25:28.428208  276743 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:25:28.580595  276743 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:25:28.581418  276743 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:25:28.584199  276743 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:25:28.587820  276743 out.go:252]   - Booting up control plane ...
	I1212 01:25:28.587929  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:25:28.588012  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:25:28.589356  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:25:28.605527  276743 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:25:28.605678  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:25:28.613455  276743 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:25:28.614074  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:25:28.614240  276743 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:25:28.755452  276743 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:25:28.755580  276743 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:27:19.226422  268396 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000962088s
	I1212 01:27:19.226635  268396 kubeadm.go:319] 
	I1212 01:27:19.226702  268396 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:27:19.226735  268396 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:27:19.226840  268396 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:27:19.226847  268396 kubeadm.go:319] 
	I1212 01:27:19.227012  268396 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:27:19.227062  268396 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:27:19.227095  268396 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:27:19.227099  268396 kubeadm.go:319] 
	I1212 01:27:19.231490  268396 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:27:19.231948  268396 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:27:19.232070  268396 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:27:19.232304  268396 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 01:27:19.232318  268396 kubeadm.go:319] 
	W1212 01:27:19.232506  268396 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-361053] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-361053] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000962088s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
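The failure mode here is kubeadm's wait-control-plane deadline: the kubelet never answers its local healthz endpoint within 4m0s, so init aborts before any static pod can come up. The checks the error message recommends can be run directly on the node (via minikube ssh; for this interleaved process the profile appears to be no-preload-361053, judging by the etcd cert DNS names above); a minimal triage sketch:

    # is the kubelet unit active, and why did it last exit?
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet -n 100 --no-pager
    # poll the exact endpoint kubeadm was waiting on
    curl -sS http://127.0.0.1:10248/healthz; echo
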
	I1212 01:27:19.232600  268396 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 01:27:19.232891  268396 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:27:19.641819  268396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:27:19.655717  268396 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:27:19.655786  268396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:27:19.664059  268396 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:27:19.664079  268396 kubeadm.go:158] found existing configuration files:
	
	I1212 01:27:19.664128  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:27:19.672510  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:27:19.672575  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:27:19.680342  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:27:19.688315  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:27:19.688383  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:27:19.696209  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:27:19.704155  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:27:19.704219  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:27:19.711899  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:27:19.719844  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:27:19.719910  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:27:19.727687  268396 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:27:19.860959  268396 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:27:19.861382  268396 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:27:19.927748  268396 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
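
The cgroups v1 deprecation warning repeats on every attempt: this 5.15.0-1084-aws kernel runs the node on cgroup v1, which kubelet v1.35 only tolerates with an explicit opt-out. If that opt-out is what is missing, the warning's own remedy is a one-line kubelet config change; a hedged sketch (whether minikube's kubeadm patches already set this field is not visible in the log):

    # append the cgroup v1 opt-out to the kubelet config written above, then restart (sketch)
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet
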
	I1212 01:29:28.751512  276743 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000466498s
	I1212 01:29:28.751546  276743 kubeadm.go:319] 
	I1212 01:29:28.751605  276743 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:29:28.751644  276743 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:29:28.751765  276743 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:29:28.751774  276743 kubeadm.go:319] 
	I1212 01:29:28.751883  276743 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:29:28.751919  276743 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:29:28.751963  276743 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:29:28.751972  276743 kubeadm.go:319] 
	I1212 01:29:28.757988  276743 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:29:28.758457  276743 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:29:28.758593  276743 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:29:28.759136  276743 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 01:29:28.759148  276743 kubeadm.go:319] 
	I1212 01:29:28.759296  276743 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 01:29:28.759448  276743 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000466498s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 01:29:28.759537  276743 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
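
Between attempts minikube wipes the half-initialized control plane with a forced, non-interactive kubeadm reset; the equivalent manual command, copied from the run above:

    # clean up state from the failed init before retrying
    sudo kubeadm reset --cri-socket /run/containerd/containerd.sock --force
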
	I1212 01:29:29.171145  276743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:29:29.184061  276743 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:29:29.184150  276743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:29:29.191792  276743 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:29:29.191813  276743 kubeadm.go:158] found existing configuration files:
	
	I1212 01:29:29.191872  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:29:29.199430  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:29:29.199502  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:29:29.206493  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:29:29.213869  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:29:29.213974  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:29:29.221146  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:29:29.228771  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:29:29.228848  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:29:29.236019  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:29:29.243394  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:29:29.243513  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:29:29.250760  276743 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:29:29.289424  276743 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:29:29.289525  276743 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:29:29.367460  276743 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:29:29.367532  276743 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:29:29.367572  276743 kubeadm.go:319] OS: Linux
	I1212 01:29:29.367620  276743 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:29:29.367668  276743 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:29:29.367716  276743 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:29:29.367765  276743 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:29:29.367814  276743 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:29:29.367862  276743 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:29:29.367907  276743 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:29:29.367956  276743 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:29:29.368003  276743 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:29:29.435977  276743 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:29:29.436136  276743 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:29:29.436234  276743 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:29:29.447414  276743 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:29:29.452711  276743 out.go:252]   - Generating certificates and keys ...
	I1212 01:29:29.452896  276743 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:29:29.452999  276743 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:29:29.453121  276743 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:29:29.453232  276743 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:29:29.453362  276743 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:29:29.453468  276743 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 01:29:29.453582  276743 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:29:29.453693  276743 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:29:29.453811  276743 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:29:29.453920  276743 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:29:29.453981  276743 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 01:29:29.454074  276743 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:29:29.661293  276743 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:29:29.926167  276743 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:29:30.228322  276743 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:29:30.325953  276743 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:29:30.468055  276743 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:29:30.469327  276743 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:29:30.473394  276743 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:29:30.478856  276743 out.go:252]   - Booting up control plane ...
	I1212 01:29:30.478958  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:29:30.479046  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:29:30.479115  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:29:30.498715  276743 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:29:30.498819  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:29:30.506278  276743 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:29:30.506595  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:29:30.506638  276743 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:29:30.667439  276743 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:29:30.667560  276743 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:31:22.255083  268396 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 01:31:22.255118  268396 kubeadm.go:319] 
	I1212 01:31:22.255185  268396 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:31:22.259224  268396 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:31:22.259291  268396 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:31:22.259384  268396 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:31:22.259445  268396 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:31:22.259485  268396 kubeadm.go:319] OS: Linux
	I1212 01:31:22.259534  268396 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:31:22.259586  268396 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:31:22.259638  268396 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:31:22.259689  268396 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:31:22.259742  268396 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:31:22.259793  268396 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:31:22.259842  268396 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:31:22.259894  268396 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:31:22.259943  268396 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:31:22.260016  268396 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:31:22.260113  268396 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:31:22.260208  268396 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:31:22.260274  268396 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:31:22.264965  268396 out.go:252]   - Generating certificates and keys ...
	I1212 01:31:22.265061  268396 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:31:22.265129  268396 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:31:22.265205  268396 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:31:22.265267  268396 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:31:22.265335  268396 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:31:22.265389  268396 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 01:31:22.265452  268396 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:31:22.265511  268396 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:31:22.265581  268396 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:31:22.265657  268396 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:31:22.265698  268396 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 01:31:22.265754  268396 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:31:22.265805  268396 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:31:22.265863  268396 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:31:22.265922  268396 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:31:22.265985  268396 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:31:22.266040  268396 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:31:22.266122  268396 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:31:22.266188  268396 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:31:22.269011  268396 out.go:252]   - Booting up control plane ...
	I1212 01:31:22.269113  268396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:31:22.269196  268396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:31:22.269313  268396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:31:22.269458  268396 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:31:22.269587  268396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:31:22.269697  268396 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:31:22.269820  268396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:31:22.269866  268396 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:31:22.270050  268396 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:31:22.270170  268396 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:31:22.270256  268396 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001218388s
	I1212 01:31:22.270267  268396 kubeadm.go:319] 
	I1212 01:31:22.270326  268396 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:31:22.270369  268396 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:31:22.270483  268396 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:31:22.270503  268396 kubeadm.go:319] 
	I1212 01:31:22.270616  268396 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:31:22.270657  268396 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:31:22.270717  268396 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:31:22.270757  268396 kubeadm.go:319] 
	I1212 01:31:22.270858  268396 kubeadm.go:403] duration metric: took 8m7.867624823s to StartCluster
	I1212 01:31:22.270898  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:31:22.270968  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:31:22.306968  268396 cri.go:89] found id: ""
	I1212 01:31:22.307036  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.307047  268396 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:31:22.307054  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:31:22.307137  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:31:22.339653  268396 cri.go:89] found id: ""
	I1212 01:31:22.339689  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.339700  268396 logs.go:284] No container was found matching "etcd"
	I1212 01:31:22.339706  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:31:22.339765  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:31:22.368586  268396 cri.go:89] found id: ""
	I1212 01:31:22.368607  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.368615  268396 logs.go:284] No container was found matching "coredns"
	I1212 01:31:22.368621  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:31:22.368680  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:31:22.393839  268396 cri.go:89] found id: ""
	I1212 01:31:22.393912  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.393934  268396 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:31:22.393960  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:31:22.394035  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:31:22.419583  268396 cri.go:89] found id: ""
	I1212 01:31:22.419608  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.419616  268396 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:31:22.419622  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:31:22.419680  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:31:22.448415  268396 cri.go:89] found id: ""
	I1212 01:31:22.448443  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.448451  268396 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:31:22.448459  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:31:22.448517  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:31:22.476913  268396 cri.go:89] found id: ""
	I1212 01:31:22.476939  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.476947  268396 logs.go:284] No container was found matching "kindnet"
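
Post-mortem, minikube probes for each expected control-plane container one name at a time, and every probe comes back empty: nothing ever started under containerd. The same sweep in a single pass, as a sketch:

    # count containers containerd knows about for each expected component
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        printf '%-24s %s\n' "$name" "$(sudo crictl ps -a --quiet --name="$name" | wc -l)"
    done
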
	I1212 01:31:22.476956  268396 logs.go:123] Gathering logs for kubelet ...
	I1212 01:31:22.476983  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:31:22.533409  268396 logs.go:123] Gathering logs for dmesg ...
	I1212 01:31:22.533444  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:31:22.548368  268396 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:31:22.548401  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:31:22.614148  268396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:31:22.605942    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.606490    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608232    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608633    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.610124    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:31:22.605942    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.606490    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608232    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608633    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.610124    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:31:22.614173  268396 logs.go:123] Gathering logs for containerd ...
	I1212 01:31:22.614185  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:31:22.656511  268396 logs.go:123] Gathering logs for container status ...
	I1212 01:31:22.656543  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
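
The log-gathering pass above collects the kubelet and containerd journals, kernel warnings, the node description (which fails, since the apiserver on localhost:8443 never came up), and container status; the final command is deliberately runtime-agnostic, preferring crictl and falling back to docker. Collecting the same evidence bundle by hand, as a sketch:

    # capture the same diagnostics minikube gathers after a failed start
    sudo journalctl -u kubelet -n 400 --no-pager    > kubelet.log
    sudo journalctl -u containerd -n 400 --no-pager > containerd.log
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo crictl ps -a > containers.log 2>&1 || sudo docker ps -a > containers.log
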
	W1212 01:31:22.687238  268396 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001218388s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:31:22.687348  268396 out.go:285] * 
	W1212 01:31:22.687426  268396 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001218388s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:31:22.687441  268396 out.go:285] * 
	W1212 01:31:22.689841  268396 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:31:22.695133  268396 out.go:203] 
	W1212 01:31:22.698069  268396 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001218388s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:31:22.698114  268396 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 01:31:22.698136  268396 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 01:31:22.701165  268396 out.go:203] 
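	The Suggestion line above is minikube's canned advice for K8S_KUBELET_NOT_RUNNING. A minimal retry along those lines would look like the sketch below; the profile name and flags are reconstructed from this run's configuration, and whether kubelet.cgroup-driver=systemd actually clears the cgroup v1 validation failure on this host is an assumption, not something this log demonstrates:

		# inspect the failing unit first, as the kubeadm error text recommends
		systemctl status kubelet
		journalctl -xeu kubelet | tail -n 50
		# retry the start with the extra-config the suggestion names
		out/minikube-linux-arm64 start -p no-preload-361053 --driver=docker \
		  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
		  --extra-config=kubelet.cgroup-driver=systemd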
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 01:23:04 no-preload-361053 containerd[760]: time="2025-12-12T01:23:04.384035480Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:05 no-preload-361053 containerd[760]: time="2025-12-12T01:23:05.373522512Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 12 01:23:05 no-preload-361053 containerd[760]: time="2025-12-12T01:23:05.375857443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 12 01:23:05 no-preload-361053 containerd[760]: time="2025-12-12T01:23:05.392808161Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:05 no-preload-361053 containerd[760]: time="2025-12-12T01:23:05.393494461Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:06 no-preload-361053 containerd[760]: time="2025-12-12T01:23:06.469307146Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 12 01:23:06 no-preload-361053 containerd[760]: time="2025-12-12T01:23:06.471770579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 12 01:23:06 no-preload-361053 containerd[760]: time="2025-12-12T01:23:06.487399898Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:06 no-preload-361053 containerd[760]: time="2025-12-12T01:23:06.488294224Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:07 no-preload-361053 containerd[760]: time="2025-12-12T01:23:07.584315785Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 12 01:23:07 no-preload-361053 containerd[760]: time="2025-12-12T01:23:07.586361959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 12 01:23:07 no-preload-361053 containerd[760]: time="2025-12-12T01:23:07.593980543Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:07 no-preload-361053 containerd[760]: time="2025-12-12T01:23:07.594678428Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:09 no-preload-361053 containerd[760]: time="2025-12-12T01:23:09.125818664Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 12 01:23:09 no-preload-361053 containerd[760]: time="2025-12-12T01:23:09.128180286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 12 01:23:09 no-preload-361053 containerd[760]: time="2025-12-12T01:23:09.138535463Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:09 no-preload-361053 containerd[760]: time="2025-12-12T01:23:09.139822900Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.221236720Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.223326176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.237305395Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.238695242Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.594370471Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.596617443Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.603750918Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.604059262Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
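	The containerd entries above show every no-preload image being imported (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, etcd, storage-provisioner), so image pull is not the failing step. A quick way to confirm that from inside the node, assuming the default CRI socket configuration minikube ships in its node image:

		# list images as the CRI sees them inside the minikube container
		out/minikube-linux-arm64 ssh -p no-preload-361053 -- sudo crictl images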
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:31:23.794625    5563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:23.795495    5563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:23.797479    5563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:23.799150    5563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:23.799606    5563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
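	The connection-refused errors are consistent with the empty container table above: kubelet never launched the kube-apiserver static pod, so nothing listens on 8443. A direct check on the node, using only standard tooling (a diagnostic sketch, not part of the test run):

		# an empty result is expected while kubelet is crash-looping
		out/minikube-linux-arm64 ssh -p no-preload-361053 -- sudo ss -ltn 'sport = :8443'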
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	[Dec12 00:40] hrtimer: interrupt took 11339963 ns
	
	
	==> kernel <==
	 01:31:23 up  2:13,  0 user,  load average: 0.37, 1.22, 1.92
	Linux no-preload-361053 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 01:31:20 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:31:21 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 12 01:31:21 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:21 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:21 no-preload-361053 kubelet[5377]: E1212 01:31:21.578229    5377 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:31:21 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:31:21 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:31:22 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 12 01:31:22 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:22 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:22 no-preload-361053 kubelet[5388]: E1212 01:31:22.344886    5388 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:31:22 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:31:22 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:31:23 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 12 01:31:23 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:23 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:23 no-preload-361053 kubelet[5469]: E1212 01:31:23.081525    5469 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:31:23 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:31:23 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:31:23 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 12 01:31:23 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:23 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:23 no-preload-361053 kubelet[5567]: E1212 01:31:23.853022    5567 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:31:23 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:31:23 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
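The kubelet journal above pins the root cause: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host, so it exits before serving /healthz and systemd restart-loops it (counter at 322). A standard way to confirm which cgroup hierarchy the host mounts; this check is general knowledge, not taken from this log:

	# cgroup2fs => unified cgroup v2; tmpfs => legacy cgroup v1 hierarchy
	stat -fc %T /sys/fs/cgroup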
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-361053 -n no-preload-361053
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-361053 -n no-preload-361053: exit status 6 (362.23852ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 01:31:24.296031  284354 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-361053" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-361053" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (512.34s)
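For completeness, the escape hatch spelled out in the kubeadm warning is a kubelet configuration override: set FailCgroupV1 to false and skip the validation. Since the init output already shows a strategic-merge patch being applied to the kubeletconfiguration target (the [patches] line), one plausible shape for that override is sketched below; the patch directory, its wiring into this harness, and the camelCase serialization of the field are assumptions:

	# hypothetical kubeadm patch allowing kubelet to start on cgroup v1
	mkdir -p /tmp/kubeadm-patches
	cat <<'EOF' > /tmp/kubeadm-patches/kubeletconfiguration+strategic.yaml
	failCgroupV1: false
	EOF
	# kubeadm would pick this up via --patches /tmp/kubeadm-patches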

TestStartStop/group/newest-cni/serial/FirstStart (502.09s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1212 01:25:50.111087    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:25:50.117491    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:25:50.128897    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:25:50.150383    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:25:50.191855    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:25:50.273315    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:25:50.434909    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:25:50.756795    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:25:51.398832    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:25:52.680733    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:25:55.243712    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:25:57.042940    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:26:00.365582    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:26:10.607660    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:26:18.648666    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:26:18.655158    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:26:18.666632    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:26:18.688086    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:26:18.729484    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:26:18.810937    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:26:18.972511    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:26:19.294312    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:26:19.936397    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:26:21.218108    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:26:23.779554    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:26:28.901927    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:26:31.089507    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:26:39.143430    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:26:44.696447    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:26:59.624975    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:27:12.051787    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:27:40.586370    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:28:33.974427    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:28:35.116883    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:28:41.614881    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:28:52.052496    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:29:02.507810    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:30:50.110088    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:30:57.046532    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:31:17.816204    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:31:18.648573    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.525110355s)

-- stdout --
	* [newest-cni-256959] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-256959" primary control-plane node in "newest-cni-256959" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

-- /stdout --
** stderr ** 
	I1212 01:25:10.610326  276743 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:25:10.611013  276743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:25:10.611028  276743 out.go:374] Setting ErrFile to fd 2...
	I1212 01:25:10.611033  276743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:25:10.611296  276743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:25:10.611727  276743 out.go:368] Setting JSON to false
	I1212 01:25:10.612585  276743 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7657,"bootTime":1765495054,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 01:25:10.612655  276743 start.go:143] virtualization:  
	I1212 01:25:10.616537  276743 out.go:179] * [newest-cni-256959] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:25:10.620721  276743 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:25:10.620800  276743 notify.go:221] Checking for updates...
	I1212 01:25:10.627029  276743 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:25:10.630074  276743 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:25:10.633037  276743 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 01:25:10.635913  276743 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:25:10.638863  276743 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:25:10.642342  276743 config.go:182] Loaded profile config "no-preload-361053": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:25:10.642439  276743 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:25:10.663336  276743 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:25:10.663491  276743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:25:10.731650  276743 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:25:10.720876374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:25:10.731750  276743 docker.go:319] overlay module found
	I1212 01:25:10.734985  276743 out.go:179] * Using the docker driver based on user configuration
	I1212 01:25:10.738034  276743 start.go:309] selected driver: docker
	I1212 01:25:10.738050  276743 start.go:927] validating driver "docker" against <nil>
	I1212 01:25:10.738062  276743 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:25:10.738778  276743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:25:10.802614  276743 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:25:10.791452052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:25:10.802835  276743 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1212 01:25:10.802872  276743 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
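	The warning appears twice because minikube writes it both to its log and to the console. The --cni flag it points to takes a named plugin or a manifest path; given that the next log line recommends kindnet for the docker driver with containerd, an equivalent, more explicit invocation would be the following (the flag value is assumed from minikube's documented options, not taken from this run):

		out/minikube-linux-arm64 start -p newest-cni-256959 --driver=docker \
		  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
		  --cni=kindnet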
	I1212 01:25:10.803130  276743 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 01:25:10.806190  276743 out.go:179] * Using Docker driver with root privileges
	I1212 01:25:10.809707  276743 cni.go:84] Creating CNI manager for ""
	I1212 01:25:10.809779  276743 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:25:10.809793  276743 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 01:25:10.809867  276743 start.go:353] cluster config:
	{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:25:10.813089  276743 out.go:179] * Starting "newest-cni-256959" primary control-plane node in "newest-cni-256959" cluster
	I1212 01:25:10.815885  276743 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 01:25:10.818744  276743 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:25:10.821553  276743 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:25:10.821612  276743 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 01:25:10.821626  276743 cache.go:65] Caching tarball of preloaded images
	I1212 01:25:10.821716  276743 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 01:25:10.821731  276743 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 01:25:10.821841  276743 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:25:10.821864  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json: {Name:mk4998d8ef384508a1b134495f81d7fc826b1990 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:10.822019  276743 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:25:10.842165  276743 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:25:10.842191  276743 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:25:10.842204  276743 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:25:10.842235  276743 start.go:360] acquireMachinesLock for newest-cni-256959: {Name:mke4c35c218ad59b1da2c46074b57e71134fc7be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:25:10.842335  276743 start.go:364] duration metric: took 80.822µs to acquireMachinesLock for "newest-cni-256959"
	I1212 01:25:10.842366  276743 start.go:93] Provisioning new machine with config: &{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 01:25:10.842439  276743 start.go:125] createHost starting for "" (driver="docker")
	I1212 01:25:10.846610  276743 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 01:25:10.846848  276743 start.go:159] libmachine.API.Create for "newest-cni-256959" (driver="docker")
	I1212 01:25:10.846884  276743 client.go:173] LocalClient.Create starting
	I1212 01:25:10.846956  276743 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem
	I1212 01:25:10.847041  276743 main.go:143] libmachine: Decoding PEM data...
	I1212 01:25:10.847062  276743 main.go:143] libmachine: Parsing certificate...
	I1212 01:25:10.847105  276743 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem
	I1212 01:25:10.847126  276743 main.go:143] libmachine: Decoding PEM data...
	I1212 01:25:10.847142  276743 main.go:143] libmachine: Parsing certificate...
	I1212 01:25:10.847512  276743 cli_runner.go:164] Run: docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 01:25:10.866623  276743 cli_runner.go:211] docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 01:25:10.866711  276743 network_create.go:284] running [docker network inspect newest-cni-256959] to gather additional debugging logs...
	I1212 01:25:10.866732  276743 cli_runner.go:164] Run: docker network inspect newest-cni-256959
	W1212 01:25:10.882830  276743 cli_runner.go:211] docker network inspect newest-cni-256959 returned with exit code 1
	I1212 01:25:10.882862  276743 network_create.go:287] error running [docker network inspect newest-cni-256959]: docker network inspect newest-cni-256959: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-256959 not found
	I1212 01:25:10.882876  276743 network_create.go:289] output of [docker network inspect newest-cni-256959]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-256959 not found
	
	** /stderr **
	I1212 01:25:10.883058  276743 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:25:10.899622  276743 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4cd687b06342 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a2:e8:c8:87:d3:0a} reservation:<nil>}
	I1212 01:25:10.899939  276743 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c02c16721c9d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:e7:06:63:2c:e9} reservation:<nil>}
	I1212 01:25:10.900288  276743 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-805b07ff58c0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:18:35:7a:03:02} reservation:<nil>}
	I1212 01:25:10.900688  276743 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a309a0}
	I1212 01:25:10.900712  276743 network_create.go:124] attempt to create docker network newest-cni-256959 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1212 01:25:10.900767  276743 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-256959 newest-cni-256959
	I1212 01:25:10.956774  276743 network_create.go:108] docker network newest-cni-256959 192.168.76.0/24 created
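
The subnet scan above is how minikube avoids colliding with existing bridge networks: it walks candidate private /24 blocks (192.168.49.0, 192.168.58.0, 192.168.67.0, ...) and takes the first one with no bridge interface attached, here 192.168.76.0/24. A minimal way to replay the check by hand, assuming the same docker CLI is on PATH (the --format templates are the ones the test itself runs):

    # subnet claimed by an existing network
    docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
    # confirm the new network landed on the chosen block
    docker network inspect newest-cni-256959 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'   # expect: 192.168.76.0/24 192.168.76.1
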
	I1212 01:25:10.956809  276743 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-256959" container
	I1212 01:25:10.956884  276743 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 01:25:10.973399  276743 cli_runner.go:164] Run: docker volume create newest-cni-256959 --label name.minikube.sigs.k8s.io=newest-cni-256959 --label created_by.minikube.sigs.k8s.io=true
	I1212 01:25:10.995879  276743 oci.go:103] Successfully created a docker volume newest-cni-256959
	I1212 01:25:10.995970  276743 cli_runner.go:164] Run: docker run --rm --name newest-cni-256959-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-256959 --entrypoint /usr/bin/test -v newest-cni-256959:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 01:25:11.527236  276743 oci.go:107] Successfully prepared a docker volume newest-cni-256959
	I1212 01:25:11.527311  276743 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:25:11.527325  276743 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 01:25:11.527417  276743 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-256959:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 01:25:15.366008  276743 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-256959:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.838540611s)
	I1212 01:25:15.366040  276743 kic.go:203] duration metric: took 3.838711624s to extract preloaded images to volume ...
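
The two docker run calls above are the preload dance: the throwaway /usr/bin/test sidecar mounts the empty named volume at /var, so Docker seeds it from the kicbase image's /var (Docker's copy-on-first-mount behavior for named volumes); the trailing '-d /var/lib' argument just asserts the directory made it across. The second container then untars the preloaded-images tarball into the same volume. A sketch of a manual spot check, assuming the tarball lays out lib/containerd under the volume root as the extraction implies:

    docker run --rm --entrypoint /bin/ls \
      -v newest-cni-256959:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f \
      /var/lib/containerd
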
	W1212 01:25:15.366202  276743 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 01:25:15.366316  276743 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 01:25:15.418308  276743 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-256959 --name newest-cni-256959 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-256959 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-256959 --network newest-cni-256959 --ip 192.168.76.2 --volume newest-cni-256959:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 01:25:15.701138  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Running}}
	I1212 01:25:15.722831  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:25:15.744776  276743 cli_runner.go:164] Run: docker exec newest-cni-256959 stat /var/lib/dpkg/alternatives/iptables
	I1212 01:25:15.800337  276743 oci.go:144] the created container "newest-cni-256959" has a running status.
	I1212 01:25:15.800363  276743 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa...
	I1212 01:25:16.255229  276743 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 01:25:16.278394  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:25:16.296982  276743 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 01:25:16.297007  276743 kic_runner.go:114] Args: [docker exec --privileged newest-cni-256959 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 01:25:16.336579  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:25:16.356755  276743 machine.go:94] provisionDockerMachine start ...
	I1212 01:25:16.356843  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:16.374168  276743 main.go:143] libmachine: Using SSH client type: native
	I1212 01:25:16.374501  276743 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 01:25:16.374511  276743 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:25:16.375249  276743 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42232->127.0.0.1:33093: read: connection reset by peer
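
The reset on the first dial is the provisioner racing sshd inside the freshly started container; it retries, and the hostname command succeeds on the next line. SSH goes to the host port Docker mapped onto the container's 22/tcp (33093 here), resolvable with the same template the test uses, so the session can also be reproduced by hand, assuming the generated key is still on disk:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-256959
    ssh -o StrictHostKeyChecking=no -p 33093 \
      -i /home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa \
      docker@127.0.0.1 hostname
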
	I1212 01:25:19.530494  276743 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:25:19.530520  276743 ubuntu.go:182] provisioning hostname "newest-cni-256959"
	I1212 01:25:19.530584  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:19.548139  276743 main.go:143] libmachine: Using SSH client type: native
	I1212 01:25:19.548459  276743 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 01:25:19.548475  276743 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-256959 && echo "newest-cni-256959" | sudo tee /etc/hostname
	I1212 01:25:19.704022  276743 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:25:19.704112  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:19.721641  276743 main.go:143] libmachine: Using SSH client type: native
	I1212 01:25:19.721955  276743 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 01:25:19.721980  276743 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-256959' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-256959/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-256959' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:25:19.879218  276743 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:25:19.879312  276743 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 01:25:19.879367  276743 ubuntu.go:190] setting up certificates
	I1212 01:25:19.879397  276743 provision.go:84] configureAuth start
	I1212 01:25:19.879518  276743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:25:19.896152  276743 provision.go:143] copyHostCerts
	I1212 01:25:19.896221  276743 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 01:25:19.896234  276743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 01:25:19.896315  276743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 01:25:19.896434  276743 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 01:25:19.896445  276743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 01:25:19.896476  276743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 01:25:19.896542  276743 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 01:25:19.896551  276743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 01:25:19.896577  276743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 01:25:19.896641  276743 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.newest-cni-256959 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-256959]
	I1212 01:25:20.204760  276743 provision.go:177] copyRemoteCerts
	I1212 01:25:20.204827  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:25:20.204875  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.224116  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.330622  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:25:20.348480  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:25:20.366287  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:25:20.383425  276743 provision.go:87] duration metric: took 503.997002ms to configureAuth
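
configureAuth regenerated the machine server certificate with SANs for every name the node can be reached by (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-256959) and copied it to /etc/docker on the node. One way to confirm the SAN list on the host-side copy, assuming openssl is available on the host:

    openssl x509 -noout -text -in /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
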
	I1212 01:25:20.383450  276743 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:25:20.383651  276743 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:25:20.383658  276743 machine.go:97] duration metric: took 4.026884265s to provisionDockerMachine
	I1212 01:25:20.383665  276743 client.go:176] duration metric: took 9.536770098s to LocalClient.Create
	I1212 01:25:20.383678  276743 start.go:167] duration metric: took 9.536832859s to libmachine.API.Create "newest-cni-256959"
	I1212 01:25:20.383685  276743 start.go:293] postStartSetup for "newest-cni-256959" (driver="docker")
	I1212 01:25:20.383694  276743 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:25:20.383742  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:25:20.383784  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.400325  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.507208  276743 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:25:20.510550  276743 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:25:20.510580  276743 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:25:20.510595  276743 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 01:25:20.510649  276743 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 01:25:20.510733  276743 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 01:25:20.510839  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:25:20.518172  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:25:20.536052  276743 start.go:296] duration metric: took 152.353471ms for postStartSetup
	I1212 01:25:20.536438  276743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:25:20.555757  276743 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:25:20.556035  276743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:25:20.556076  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.580490  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.688189  276743 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:25:20.693057  276743 start.go:128] duration metric: took 9.850603168s to createHost
	I1212 01:25:20.693084  276743 start.go:83] releasing machines lock for "newest-cni-256959", held for 9.850734377s
	I1212 01:25:20.693172  276743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:25:20.709859  276743 ssh_runner.go:195] Run: cat /version.json
	I1212 01:25:20.709914  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.710177  276743 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:25:20.710239  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.729457  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.741797  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.834710  276743 ssh_runner.go:195] Run: systemctl --version
	I1212 01:25:20.924313  276743 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:25:20.928847  276743 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:25:20.928951  276743 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:25:20.954175  276743 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
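
Pre-existing bridge/podman CNI configs get a .mk_disabled suffix so they cannot shadow the CNI minikube installs itself (kindnet is selected further down for the docker driver + containerd combination). To see what was moved aside, something like:

    docker exec newest-cni-256959 ls /etc/cni/net.d
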
	I1212 01:25:20.954200  276743 start.go:496] detecting cgroup driver to use...
	I1212 01:25:20.954231  276743 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:25:20.954281  276743 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 01:25:20.969656  276743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 01:25:20.982642  276743 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:25:20.982710  276743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:25:20.999922  276743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:25:21.020615  276743 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:25:21.141838  276743 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:25:21.263080  276743 docker.go:234] disabling docker service ...
	I1212 01:25:21.263148  276743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:25:21.287246  276743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:25:21.310750  276743 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:25:21.444187  276743 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:25:21.567374  276743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:25:21.580397  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:25:21.594203  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 01:25:21.603451  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 01:25:21.612481  276743 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 01:25:21.612614  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 01:25:21.621568  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:25:21.630132  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 01:25:21.639772  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:25:21.648550  276743 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:25:21.656926  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 01:25:21.666027  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 01:25:21.675421  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 01:25:21.684275  276743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:25:21.692101  276743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:25:21.699082  276743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:25:21.804978  276743 ssh_runner.go:195] Run: sudo systemctl restart containerd
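
The sed batch above rewrites /etc/containerd/config.toml in place before the restart: the sandbox (pause) image is pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup is forced to false to match the cgroupfs driver detected on the host, the legacy io.containerd.runtime.v1.linux / runc.v1 runtime names are rewritten to io.containerd.runc.v2, and enable_unprivileged_ports is switched on under the CRI plugin. A quick sanity grep inside the node after the restart:

    docker exec newest-cni-256959 grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml
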
	I1212 01:25:21.939895  276743 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 01:25:21.939976  276743 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 01:25:21.944016  276743 start.go:564] Will wait 60s for crictl version
	I1212 01:25:21.944158  276743 ssh_runner.go:195] Run: which crictl
	I1212 01:25:21.947592  276743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:25:21.970388  276743 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
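
crictl finds containerd through the endpoint written to /etc/crictl.yaml a few lines earlier; the same check can be run by hand without relying on that file by passing the endpoint explicitly:

    docker exec newest-cni-256959 sudo /usr/local/bin/crictl \
      --runtime-endpoint unix:///run/containerd/containerd.sock version
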
	I1212 01:25:21.970506  276743 ssh_runner.go:195] Run: containerd --version
	I1212 01:25:21.989928  276743 ssh_runner.go:195] Run: containerd --version
	I1212 01:25:22.016197  276743 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 01:25:22.019317  276743 cli_runner.go:164] Run: docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:25:22.042635  276743 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 01:25:22.047510  276743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:25:22.062564  276743 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 01:25:22.065397  276743 kubeadm.go:884] updating cluster {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:25:22.065551  276743 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:25:22.065640  276743 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:25:22.102156  276743 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:25:22.102183  276743 containerd.go:534] Images already preloaded, skipping extraction
	I1212 01:25:22.102250  276743 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:25:22.129883  276743 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:25:22.129908  276743 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:25:22.129916  276743 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1212 01:25:22.130003  276743 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-256959 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:25:22.130072  276743 ssh_runner.go:195] Run: sudo crictl info
	I1212 01:25:22.157382  276743 cni.go:84] Creating CNI manager for ""
	I1212 01:25:22.157407  276743 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:25:22.157422  276743 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 01:25:22.157449  276743 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-256959 NodeName:newest-cni-256959 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:25:22.157566  276743 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-256959"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:25:22.157640  276743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 01:25:22.165592  276743 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:25:22.165664  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:25:22.173544  276743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:25:22.186913  276743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 01:25:22.199980  276743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
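
All three generated artifacts are now staged on the node: the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit, and the kubeadm config at /var/tmp/minikube/kubeadm.yaml.new (promoted to kubeadm.yaml just before init below). A possible lint of the staged file, assuming the 'kubeadm config validate' subcommand is present in the v1.35.0-beta.0 binary as it is in recent stable releases:

    docker exec newest-cni-256959 sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm \
      config validate --config /var/tmp/minikube/kubeadm.yaml.new
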
	I1212 01:25:22.212497  276743 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:25:22.216212  276743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:25:22.226129  276743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:25:22.341565  276743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:25:22.362735  276743 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959 for IP: 192.168.76.2
	I1212 01:25:22.362758  276743 certs.go:195] generating shared ca certs ...
	I1212 01:25:22.362774  276743 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:22.362922  276743 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 01:25:22.362982  276743 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 01:25:22.363063  276743 certs.go:257] generating profile certs ...
	I1212 01:25:22.363128  276743 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key
	I1212 01:25:22.363145  276743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.crt with IP's: []
	I1212 01:25:23.043220  276743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.crt ...
	I1212 01:25:23.043305  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.crt: {Name:mke800b4895a7f26c3f61118ac2a9636e3a9248a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.043557  276743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key ...
	I1212 01:25:23.043596  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key: {Name:mkb2206776a08341de5b9d37086d859f3539aa54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.043743  276743 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93
	I1212 01:25:23.043783  276743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1212 01:25:23.163980  276743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93 ...
	I1212 01:25:23.164017  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93: {Name:mk05b9dd6b8930af6580fe78d40e6026f3e8847a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.164237  276743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93 ...
	I1212 01:25:23.164254  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93: {Name:mk5e2ac6bbc37c39d5b319f8600a5d25e63c4a12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.164355  276743 certs.go:382] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93 -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt
	I1212 01:25:23.164449  276743 certs.go:386] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93 -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key
	I1212 01:25:23.164518  276743 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key
	I1212 01:25:23.164541  276743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt with IP's: []
	I1212 01:25:23.503416  276743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt ...
	I1212 01:25:23.503453  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt: {Name:mka5a6a7cee07eb7c969d496d8aa380d667ba867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.503635  276743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key ...
	I1212 01:25:23.503652  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key: {Name:mkd0b1a9e86a7f90668157e83a73d06f56064ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.503848  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 01:25:23.503900  276743 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 01:25:23.503913  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:25:23.503965  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:25:23.503999  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:25:23.504032  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 01:25:23.504080  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:25:23.504696  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:25:23.524669  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:25:23.551586  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:25:23.572683  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:25:23.594234  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:25:23.612544  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:25:23.629869  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:25:23.647023  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:25:23.664698  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 01:25:23.682123  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:25:23.699502  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 01:25:23.716689  276743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:25:23.729602  276743 ssh_runner.go:195] Run: openssl version
	I1212 01:25:23.735703  276743 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.742851  276743 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 01:25:23.750077  276743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.753690  276743 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.753758  276743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.795206  276743 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:25:23.802639  276743 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4290.pem /etc/ssl/certs/51391683.0
	I1212 01:25:23.809951  276743 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.817139  276743 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 01:25:23.830841  276743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.834926  276743 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.835070  276743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.876422  276743 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 01:25:23.885502  276743 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42902.pem /etc/ssl/certs/3ec20f2e.0
	I1212 01:25:23.893037  276743 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.900652  276743 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:25:23.908635  276743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.912614  276743 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.912690  276743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.956401  276743 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:25:23.964299  276743 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
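
The ln/openssl pairs above build the standard OpenSSL CA-directory layout: each trusted PEM is symlinked into /etc/ssl/certs under its subject-hash name, which is how TLS clients on the node look certificates up. Replaying one mapping (inside the node container):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem
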
	I1212 01:25:23.971681  276743 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:25:23.975558  276743 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 01:25:23.975609  276743 kubeadm.go:401] StartCluster: {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:25:23.975697  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 01:25:23.975759  276743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:25:24.003976  276743 cri.go:89] found id: ""
	I1212 01:25:24.004073  276743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:25:24.014227  276743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:25:24.022866  276743 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:25:24.022958  276743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:25:24.031328  276743 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:25:24.031359  276743 kubeadm.go:158] found existing configuration files:
	
	I1212 01:25:24.031425  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:25:24.039632  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:25:24.039710  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:25:24.047426  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:25:24.055269  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:25:24.055386  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:25:24.062906  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:25:24.070757  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:25:24.070846  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:25:24.078322  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:25:24.086235  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:25:24.086340  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:25:24.093978  276743 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:25:24.130495  276743 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:25:24.130556  276743 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:25:24.204494  276743 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:25:24.204576  276743 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:25:24.204617  276743 kubeadm.go:319] OS: Linux
	I1212 01:25:24.204667  276743 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:25:24.204719  276743 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:25:24.204770  276743 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:25:24.204821  276743 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:25:24.204871  276743 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:25:24.204928  276743 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:25:24.204978  276743 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:25:24.205039  276743 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:25:24.205089  276743 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:25:24.274059  276743 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:25:24.274248  276743 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:25:24.274393  276743 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:25:24.281432  276743 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:25:24.288265  276743 out.go:252]   - Generating certificates and keys ...
	I1212 01:25:24.288438  276743 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:25:24.288544  276743 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:25:24.872395  276743 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 01:25:24.948048  276743 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 01:25:25.302518  276743 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 01:25:25.648856  276743 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 01:25:25.789938  276743 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 01:25:25.790397  276743 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 01:25:26.099340  276743 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 01:25:26.099559  276743 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 01:25:26.538607  276743 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 01:25:27.389042  276743 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 01:25:27.842473  276743 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 01:25:27.842877  276743 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:25:27.936371  276743 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:25:28.210661  276743 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:25:28.314836  276743 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:25:28.428208  276743 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:25:28.580595  276743 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:25:28.581418  276743 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:25:28.584199  276743 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:25:28.587820  276743 out.go:252]   - Booting up control plane ...
	I1212 01:25:28.587929  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:25:28.588012  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:25:28.589356  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:25:28.605527  276743 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:25:28.605678  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:25:28.613455  276743 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:25:28.614074  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:25:28.614240  276743 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:25:28.755452  276743 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:25:28.755580  276743 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:29:28.751512  276743 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000466498s
	I1212 01:29:28.751546  276743 kubeadm.go:319] 
	I1212 01:29:28.751605  276743 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:29:28.751644  276743 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:29:28.751765  276743 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:29:28.751774  276743 kubeadm.go:319] 
	I1212 01:29:28.751883  276743 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:29:28.751919  276743 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:29:28.751963  276743 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:29:28.751972  276743 kubeadm.go:319] 
	I1212 01:29:28.757988  276743 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:29:28.758457  276743 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:29:28.758593  276743 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:29:28.759136  276743 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 01:29:28.759148  276743 kubeadm.go:319] 
	I1212 01:29:28.759296  276743 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
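The cgroups v1 warning repeated throughout this failure names a concrete mitigation. A minimal sketch of it, assuming the KubeletConfiguration field is spelled failCgroupV1 (lower camel case; the warning only gives the option name 'FailCgroupV1') and using the kubeadm patches mechanism this run already exercises ([patches] ... target "kubeletconfiguration" above):

    # Hypothetical patch file; kubeadm applies files from a --patches directory
    # whose names start with the target name, here "kubeletconfiguration".
    sudo mkdir -p /etc/kubernetes/patches
    cat <<'EOF' | sudo tee /etc/kubernetes/patches/kubeletconfiguration.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF

Whether minikube wires such a directory into its kubeadm invocation is not shown in this log; the sketch only illustrates the shape of the fix the warning points at.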
	W1212 01:29:28.759448  276743 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000466498s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
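Before the retry that follows, the reported failure is reproducible by hand. A sketch, assuming only the node container name from this run (newest-cni-256959) and the healthz endpoint quoted in the error:

    # Probe the endpoint the wait-control-plane phase polls.
    docker exec newest-cni-256959 curl -sSL http://127.0.0.1:10248/healthz
    # Then follow the error text's own hints to see why the kubelet is down.
    docker exec newest-cni-256959 systemctl status kubelet
    docker exec newest-cni-256959 journalctl -xeu kubelet --no-pager | tail -n 50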
	
	I1212 01:29:28.759537  276743 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 01:29:29.171145  276743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:29:29.184061  276743 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:29:29.184150  276743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:29:29.191792  276743 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:29:29.191813  276743 kubeadm.go:158] found existing configuration files:
	
	I1212 01:29:29.191872  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:29:29.199430  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:29:29.199502  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:29:29.206493  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:29:29.213869  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:29:29.213974  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:29:29.221146  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:29:29.228771  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:29:29.228848  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:29:29.236019  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:29:29.243394  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:29:29.243513  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
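The four grep-and-remove pairs above implement one check: keep a kubeconfig only if it already points at the expected control plane. A condensed sketch of the same loop, with the endpoint and paths taken verbatim from the log:

    for f in admin kubelet controller-manager scheduler; do
      # Remove any kubeconfig that does not reference the expected endpoint.
      sudo grep -q 'https://control-plane.minikube.internal:8443' \
        "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
    done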
	I1212 01:29:29.250760  276743 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:29:29.289424  276743 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:29:29.289525  276743 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:29:29.367460  276743 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:29:29.367532  276743 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:29:29.367572  276743 kubeadm.go:319] OS: Linux
	I1212 01:29:29.367620  276743 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:29:29.367668  276743 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:29:29.367716  276743 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:29:29.367765  276743 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:29:29.367814  276743 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:29:29.367862  276743 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:29:29.367907  276743 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:29:29.367956  276743 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:29:29.368003  276743 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:29:29.435977  276743 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:29:29.436136  276743 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:29:29.436234  276743 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:29:29.447414  276743 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:29:29.452711  276743 out.go:252]   - Generating certificates and keys ...
	I1212 01:29:29.452896  276743 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:29:29.452999  276743 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:29:29.453121  276743 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:29:29.453232  276743 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:29:29.453362  276743 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:29:29.453468  276743 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 01:29:29.453582  276743 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:29:29.453693  276743 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:29:29.453811  276743 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:29:29.453920  276743 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:29:29.453981  276743 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 01:29:29.454074  276743 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:29:29.661293  276743 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:29:29.926167  276743 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:29:30.228322  276743 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:29:30.325953  276743 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:29:30.468055  276743 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:29:30.469327  276743 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:29:30.473394  276743 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:29:30.478856  276743 out.go:252]   - Booting up control plane ...
	I1212 01:29:30.478958  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:29:30.479046  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:29:30.479115  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:29:30.498715  276743 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:29:30.498819  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:29:30.506278  276743 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:29:30.506595  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:29:30.506638  276743 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:29:30.667439  276743 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:29:30.667560  276743 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:33:30.663739  276743 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001214099s
	I1212 01:33:30.663765  276743 kubeadm.go:319] 
	I1212 01:33:30.663824  276743 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:33:30.664225  276743 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:33:30.664463  276743 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:33:30.664470  276743 kubeadm.go:319] 
	I1212 01:33:30.664859  276743 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:33:30.664924  276743 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:33:30.664997  276743 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:33:30.665002  276743 kubeadm.go:319] 
	I1212 01:33:30.670247  276743 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:33:30.670737  276743 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:33:30.670876  276743 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:33:30.671132  276743 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 01:33:30.671145  276743 kubeadm.go:319] 
	I1212 01:33:30.671240  276743 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:33:30.671353  276743 kubeadm.go:403] duration metric: took 8m6.695748826s to StartCluster
	I1212 01:33:30.671412  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:33:30.671514  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:33:30.695843  276743 cri.go:89] found id: ""
	I1212 01:33:30.695865  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.695874  276743 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:33:30.695882  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:33:30.695947  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:33:30.721320  276743 cri.go:89] found id: ""
	I1212 01:33:30.721346  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.721355  276743 logs.go:284] No container was found matching "etcd"
	I1212 01:33:30.721361  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:33:30.721447  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:33:30.745399  276743 cri.go:89] found id: ""
	I1212 01:33:30.745432  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.745441  276743 logs.go:284] No container was found matching "coredns"
	I1212 01:33:30.745447  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:33:30.745544  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:33:30.770020  276743 cri.go:89] found id: ""
	I1212 01:33:30.770053  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.770062  276743 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:33:30.770082  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:33:30.770166  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:33:30.793304  276743 cri.go:89] found id: ""
	I1212 01:33:30.793329  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.793338  276743 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:33:30.793344  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:33:30.793405  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:33:30.821216  276743 cri.go:89] found id: ""
	I1212 01:33:30.821286  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.821295  276743 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:33:30.821302  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:33:30.821374  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:33:30.849092  276743 cri.go:89] found id: ""
	I1212 01:33:30.849118  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.849127  276743 logs.go:284] No container was found matching "kindnet"
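Every expected control-plane container comes back empty above, consistent with the kubelet never launching the static pods. The same survey can be repeated manually; a sketch assuming the profile name from this run:

    # Enumerate the containers the diagnostics looked for, via the node's CRI.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      minikube -p newest-cni-256959 ssh -- sudo crictl ps -a --quiet --name="$name"
    done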
	I1212 01:33:30.849160  276743 logs.go:123] Gathering logs for kubelet ...
	I1212 01:33:30.849178  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:33:30.908511  276743 logs.go:123] Gathering logs for dmesg ...
	I1212 01:33:30.908546  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:33:30.921702  276743 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:33:30.921728  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:33:30.986459  276743 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:33:30.978227    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.978917    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.980428    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.980954    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.982546    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:33:30.978227    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.978917    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.980428    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.980954    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.982546    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:33:30.986482  276743 logs.go:123] Gathering logs for containerd ...
	I1212 01:33:30.986494  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:33:31.026654  276743 logs.go:123] Gathering logs for container status ...
	I1212 01:33:31.026689  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
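The describe-nodes failure gathered above is the API server being down rather than a kubectl problem; the refusal on localhost:8443 can be confirmed directly. A sketch reusing the binary and kubeconfig paths from the log:

    minikube -p newest-cni-256959 ssh -- \
      sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig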
	W1212 01:33:31.065726  276743 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001214099s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:33:31.065772  276743 out.go:285] * 
	W1212 01:33:31.065854  276743 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001214099s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:33:31.065866  276743 out.go:285] * 
	W1212 01:33:31.067985  276743 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
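As the box instructs, the log bundle is the artifact to attach when filing an issue; for this run that would be (adding -p because the profile is non-default):

    minikube -p newest-cni-256959 logs --file=logs.txt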
	I1212 01:33:31.073102  276743 out.go:203] 
	W1212 01:33:31.076901  276743 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001214099s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:33:31.076950  276743 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 01:33:31.076972  276743 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 01:33:31.079948  276743 out.go:203] 
** /stderr **
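Note: the failure above is kubeadm timing out on the kubelet's local health endpoint. A plausible manual triage, combining the commands kubeadm and minikube themselves suggest in the output (profile name taken from this run; these commands are not part of the test):

	# Probe the same health endpoint kubeadm polls, from inside the node
	minikube ssh -p newest-cni-256959 -- curl -sSL http://127.0.0.1:10248/healthz
	# Inspect the kubelet unit and its recent journal entries
	minikube ssh -p newest-cni-256959 -- sudo systemctl status kubelet
	minikube ssh -p newest-cni-256959 -- sudo journalctl -xeu kubelet
	# Check the node's cgroup hierarchy: 'cgroup2fs' means v2, 'tmpfs' means v1
	minikube ssh -p newest-cni-256959 -- stat -fc %T /sys/fs/cgroup

If the node is on cgroups v1, as the SystemVerification warning indicates, the log's own suggestion is to retry minikube start with --extra-config=kubelet.cgroup-driver=systemd.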
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-256959
helpers_test.go:244: (dbg) docker inspect newest-cni-256959:
-- stdout --
	[
	    {
	        "Id": "361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b",
	        "Created": "2025-12-12T01:25:15.433462291Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 277175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T01:25:15.494100167Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/hostname",
	        "HostsPath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/hosts",
	        "LogPath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b-json.log",
	        "Name": "/newest-cni-256959",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-256959:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-256959",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b",
	                "LowerDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017/merged",
	                "UpperDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017/diff",
	                "WorkDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-256959",
	                "Source": "/var/lib/docker/volumes/newest-cni-256959/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-256959",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-256959",
	                "name.minikube.sigs.k8s.io": "newest-cni-256959",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0b5fdda8c44db2b08c6f089f74d1eb8e7f3198550ce1c1afce9d13d69b6616c0",
	            "SandboxKey": "/var/run/docker/netns/0b5fdda8c44d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-256959": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:7e:47:09:12:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "08d9e23f02a4d7730d420d79f658bc1854aa3d62ee2a54a8cd34a455b2ba0431",
	                    "EndpointID": "cbdc9207c393fe6537a0e89077b0b631c11292137bcf558f1de9aba21fb8c57a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-256959",
	                        "361f9c16c44a"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
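The inspect output above is where the harness reads the host-side port mappings (22/tcp → 33093 for SSH, 8443/tcp → 33096 for the apiserver). The Go template minikube uses for this later in the log can be run standalone; a minimal sketch against this container:

	# Extract the host port bound to the container's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-256959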
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-256959 -n newest-cni-256959
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-256959 -n newest-cni-256959: exit status 6 (338.724649ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1212 01:33:31.500273  288946 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-256959" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
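Exit status 6 here means the host container is running but the kubeconfig has no entry for the profile, matching the stderr above. The warning already names the repair; a sketch, assuming the same profile:

	# Rewrite the kubeconfig entry for this profile, then confirm the context exists
	minikube update-context -p newest-cni-256959
	kubectl config get-contexts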
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-256959 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-147581 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p old-k8s-version-147581                                                                                                                                                                                                                                  │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p old-k8s-version-147581                                                                                                                                                                                                                                  │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:23 UTC │
	│ image   │ default-k8s-diff-port-971096 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ pause   │ -p default-k8s-diff-port-971096 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ unpause │ -p default-k8s-diff-port-971096 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p disable-driver-mounts-539158                                                                                                                                                                                                                            │ disable-driver-mounts-539158 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-648696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ stop    │ -p embed-certs-648696 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ addons  │ enable dashboard -p embed-certs-648696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:24 UTC │
	│ image   │ embed-certs-648696 image list --format=json                                                                                                                                                                                                                │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ pause   │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ unpause │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ start   │ -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-361053 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:31 UTC │                     │
	│ stop    │ -p no-preload-361053 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │ 12 Dec 25 01:33 UTC │
	│ addons  │ enable dashboard -p no-preload-361053 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │ 12 Dec 25 01:33 UTC │
	│ start   │ -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 01:33:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:33:10.429459  287206 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:33:10.429581  287206 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:33:10.429595  287206 out.go:374] Setting ErrFile to fd 2...
	I1212 01:33:10.429600  287206 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:33:10.429856  287206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:33:10.430230  287206 out.go:368] Setting JSON to false
	I1212 01:33:10.431163  287206 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8137,"bootTime":1765495054,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 01:33:10.431230  287206 start.go:143] virtualization:  
	I1212 01:33:10.434281  287206 out.go:179] * [no-preload-361053] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:33:10.438251  287206 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:33:10.438392  287206 notify.go:221] Checking for updates...
	I1212 01:33:10.444185  287206 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:33:10.447214  287206 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:33:10.450100  287206 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 01:33:10.452984  287206 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:33:10.455808  287206 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:33:10.459169  287206 config.go:182] Loaded profile config "no-preload-361053": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:33:10.459786  287206 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:33:10.491859  287206 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:33:10.491978  287206 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:33:10.546591  287206 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:33:10.536325619 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:33:10.546711  287206 docker.go:319] overlay module found
	I1212 01:33:10.549899  287206 out.go:179] * Using the docker driver based on existing profile
	I1212 01:33:10.552847  287206 start.go:309] selected driver: docker
	I1212 01:33:10.552889  287206 start.go:927] validating driver "docker" against &{Name:no-preload-361053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:33:10.552995  287206 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:33:10.553716  287206 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:33:10.609060  287206 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:33:10.599832814 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:33:10.609400  287206 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:33:10.609433  287206 cni.go:84] Creating CNI manager for ""
	I1212 01:33:10.609483  287206 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:33:10.609530  287206 start.go:353] cluster config:
	{Name:no-preload-361053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:33:10.614478  287206 out.go:179] * Starting "no-preload-361053" primary control-plane node in "no-preload-361053" cluster
	I1212 01:33:10.617235  287206 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 01:33:10.620106  287206 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:33:10.622869  287206 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:33:10.622947  287206 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:33:10.623042  287206 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/config.json ...
	I1212 01:33:10.623355  287206 cache.go:107] acquiring lock: {Name:mk86e2a34ccf063d967d1b885c7693629a6b1892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623437  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1212 01:33:10.623451  287206 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 115.784µs
	I1212 01:33:10.623465  287206 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1212 01:33:10.623481  287206 cache.go:107] acquiring lock: {Name:mk5046428d0406b9fe0bac2e28c1f5cc3958499f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623518  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1212 01:33:10.623527  287206 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 47.795µs
	I1212 01:33:10.623533  287206 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1212 01:33:10.623546  287206 cache.go:107] acquiring lock: {Name:mkc4887793edcc3c6296024b677e69f6ec1f79f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623586  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1212 01:33:10.623594  287206 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 49.322µs
	I1212 01:33:10.623600  287206 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1212 01:33:10.623610  287206 cache.go:107] acquiring lock: {Name:mkeb49560acf33aa79e308e0b71177927ef617d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623642  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1212 01:33:10.623650  287206 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 41.412µs
	I1212 01:33:10.623656  287206 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1212 01:33:10.623665  287206 cache.go:107] acquiring lock: {Name:mk2f0a11f2d527d62eb30e98e76f3a359773886b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623691  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1212 01:33:10.623696  287206 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.763µs
	I1212 01:33:10.623707  287206 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1212 01:33:10.623716  287206 cache.go:107] acquiring lock: {Name:mkf75c8f281a4d7578645f330ed9cc6bf48ab550 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623747  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1212 01:33:10.623755  287206 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 40.37µs
	I1212 01:33:10.623761  287206 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1212 01:33:10.623772  287206 cache.go:107] acquiring lock: {Name:mk1d6384b2d8bd32efb0f4661eaa55ecd74d4b80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623803  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1212 01:33:10.623812  287206 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 42.807µs
	I1212 01:33:10.623817  287206 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1212 01:33:10.623321  287206 cache.go:107] acquiring lock: {Name:mk71cce41032f52f0748ef343d21f16410e3a1fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623892  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1212 01:33:10.623901  287206 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 595.264µs
	I1212 01:33:10.623907  287206 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1212 01:33:10.623913  287206 cache.go:87] Successfully saved all images to host disk.
	I1212 01:33:10.643214  287206 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:33:10.643238  287206 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:33:10.643258  287206 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:33:10.643289  287206 start.go:360] acquireMachinesLock for no-preload-361053: {Name:mk154c67822339b116aad3ea851214e3043755e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.643359  287206 start.go:364] duration metric: took 48.558µs to acquireMachinesLock for "no-preload-361053"
	I1212 01:33:10.643382  287206 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:33:10.643393  287206 fix.go:54] fixHost starting: 
	I1212 01:33:10.643654  287206 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:33:10.661405  287206 fix.go:112] recreateIfNeeded on no-preload-361053: state=Stopped err=<nil>
	W1212 01:33:10.661436  287206 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:33:10.664651  287206 out.go:252] * Restarting existing docker container for "no-preload-361053" ...
	I1212 01:33:10.664755  287206 cli_runner.go:164] Run: docker start no-preload-361053
	I1212 01:33:10.948880  287206 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:33:10.974106  287206 kic.go:430] container "no-preload-361053" state is running.
	I1212 01:33:10.974585  287206 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-361053
	I1212 01:33:10.995294  287206 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/config.json ...
	I1212 01:33:10.995534  287206 machine.go:94] provisionDockerMachine start ...
	I1212 01:33:10.995608  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:11.019191  287206 main.go:143] libmachine: Using SSH client type: native
	I1212 01:33:11.019517  287206 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 01:33:11.019526  287206 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:33:11.020659  287206 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 01:33:14.170473  287206 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-361053
	
	I1212 01:33:14.170498  287206 ubuntu.go:182] provisioning hostname "no-preload-361053"
	I1212 01:33:14.170559  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:14.188567  287206 main.go:143] libmachine: Using SSH client type: native
	I1212 01:33:14.188886  287206 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 01:33:14.188903  287206 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-361053 && echo "no-preload-361053" | sudo tee /etc/hostname
	I1212 01:33:14.348144  287206 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-361053
	
	I1212 01:33:14.348281  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:14.367391  287206 main.go:143] libmachine: Using SSH client type: native
	I1212 01:33:14.367704  287206 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 01:33:14.367719  287206 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-361053' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-361053/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-361053' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:33:14.519558  287206 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:33:14.519628  287206 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 01:33:14.519686  287206 ubuntu.go:190] setting up certificates
	I1212 01:33:14.519722  287206 provision.go:84] configureAuth start
	I1212 01:33:14.519802  287206 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-361053
	I1212 01:33:14.543680  287206 provision.go:143] copyHostCerts
	I1212 01:33:14.543759  287206 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 01:33:14.543768  287206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 01:33:14.543857  287206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 01:33:14.543983  287206 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 01:33:14.543989  287206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 01:33:14.544018  287206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 01:33:14.544096  287206 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 01:33:14.544103  287206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 01:33:14.544130  287206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 01:33:14.544187  287206 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.no-preload-361053 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-361053]
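As a side note, the SAN list in the line above can be verified on the generated certificate with openssl (the -ext option needs OpenSSL 1.1.1 or newer; path taken from the log line):

	# Print the Subject Alternative Names baked into the server certificate
	openssl x509 -in /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem -noout -ext subjectAltName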
	I1212 01:33:14.844647  287206 provision.go:177] copyRemoteCerts
	I1212 01:33:14.844713  287206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:33:14.844788  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:14.862571  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:14.966655  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:33:14.983728  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:33:15.000842  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 01:33:15.029620  287206 provision.go:87] duration metric: took 509.857308ms to configureAuth
	I1212 01:33:15.029672  287206 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:33:15.029880  287206 config.go:182] Loaded profile config "no-preload-361053": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:33:15.029895  287206 machine.go:97] duration metric: took 4.034345397s to provisionDockerMachine
	I1212 01:33:15.029904  287206 start.go:293] postStartSetup for "no-preload-361053" (driver="docker")
	I1212 01:33:15.029919  287206 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:33:15.029980  287206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:33:15.030125  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:15.050338  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:15.155159  287206 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:33:15.158821  287206 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:33:15.158851  287206 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:33:15.158882  287206 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 01:33:15.159031  287206 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 01:33:15.159139  287206 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 01:33:15.159244  287206 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:33:15.167777  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:33:15.188093  287206 start.go:296] duration metric: took 158.172096ms for postStartSetup
	I1212 01:33:15.188178  287206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:33:15.188223  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:15.205702  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:15.308942  287206 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:33:15.313983  287206 fix.go:56] duration metric: took 4.670584581s for fixHost
	I1212 01:33:15.314011  287206 start.go:83] releasing machines lock for "no-preload-361053", held for 4.670641336s
	I1212 01:33:15.314079  287206 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-361053
	I1212 01:33:15.332761  287206 ssh_runner.go:195] Run: cat /version.json
	I1212 01:33:15.332818  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:15.333070  287206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:33:15.333129  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:15.357886  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:15.373191  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:15.462718  287206 ssh_runner.go:195] Run: systemctl --version
	I1212 01:33:15.559571  287206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:33:15.564162  287206 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:33:15.564271  287206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:33:15.572295  287206 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 01:33:15.572323  287206 start.go:496] detecting cgroup driver to use...
	I1212 01:33:15.572376  287206 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:33:15.572457  287206 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 01:33:15.590265  287206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 01:33:15.603931  287206 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:33:15.604040  287206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:33:15.619709  287206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:33:15.633120  287206 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:33:15.745120  287206 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:33:15.856267  287206 docker.go:234] disabling docker service ...
	I1212 01:33:15.856362  287206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:33:15.872142  287206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:33:15.885538  287206 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:33:16.007318  287206 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:33:16.145250  287206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:33:16.158078  287206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:33:16.173659  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 01:33:16.183387  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 01:33:16.192439  287206 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 01:33:16.192510  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 01:33:16.201771  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:33:16.210383  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 01:33:16.219183  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:33:16.227825  287206 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:33:16.236204  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 01:33:16.245075  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 01:33:16.253975  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 01:33:16.263051  287206 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:33:16.271105  287206 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:33:16.278773  287206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:33:16.395685  287206 ssh_runner.go:195] Run: sudo systemctl restart containerd
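(annotation) The sed chain above rewrites /etc/containerd/config.toml to the cgroupfs driver (SystemdCgroup = false) before the daemon-reload and restart. A sketch of the central rewrite done with Go's regexp instead of sed, under the assumption of a stock config.toml layout; error handling is minimal:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}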
	I1212 01:33:16.502787  287206 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 01:33:16.502918  287206 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 01:33:16.506854  287206 start.go:564] Will wait 60s for crictl version
	I1212 01:33:16.506959  287206 ssh_runner.go:195] Run: which crictl
	I1212 01:33:16.510418  287206 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:33:16.536180  287206 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 01:33:16.536315  287206 ssh_runner.go:195] Run: containerd --version
	I1212 01:33:16.557674  287206 ssh_runner.go:195] Run: containerd --version
	I1212 01:33:16.585134  287206 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 01:33:16.587946  287206 cli_runner.go:164] Run: docker network inspect no-preload-361053 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:33:16.609867  287206 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1212 01:33:16.613918  287206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
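(annotation) The bash pipeline above is an idempotent /etc/hosts update: grep -v strips any stale host.minikube.internal line, then the fresh mapping is appended and the temp file copied back. A sketch of the same drop-then-append logic in Go; the IP and hostname come from the log, and writing the real file needs root:

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.85.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// drop any stale mapping for the same name, as grep -v does above
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}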
	I1212 01:33:16.623744  287206 kubeadm.go:884] updating cluster {Name:no-preload-361053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikub
I1212 01:33:16.623744  287206 kubeadm.go:884] updating cluster {Name:no-preload-361053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:33:16.623857  287206 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:33:16.623916  287206 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:33:16.650653  287206 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:33:16.650674  287206 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:33:16.650681  287206 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1212 01:33:16.650792  287206 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-361053 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
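(annotation) The kubelet drop-in above is generated in memory and, per the later "scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)" line, copied over ssh. A sketch of rendering the same unit locally with text/template; the flag values are taken from the log, the output filename is illustrative:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("unit").Parse(unit))
	f, err := os.Create("10-kubeadm.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := t.Execute(f, map[string]string{
		"Version": "v1.35.0-beta.0",
		"Node":    "no-preload-361053",
		"IP":      "192.168.85.2",
	}); err != nil {
		panic(err)
	}
}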
	I1212 01:33:16.650867  287206 ssh_runner.go:195] Run: sudo crictl info
	I1212 01:33:16.676358  287206 cni.go:84] Creating CNI manager for ""
	I1212 01:33:16.676391  287206 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:33:16.676434  287206 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 01:33:16.676473  287206 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-361053 NodeName:no-preload-361053 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:33:16.676614  287206 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-361053"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:33:16.676692  287206 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 01:33:16.684549  287206 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:33:16.684631  287206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:33:16.692278  287206 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:33:16.704678  287206 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 01:33:16.717453  287206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
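(annotation) The kubeadm config rendered above is copied to /var/tmp/minikube/kubeadm.yaml.new, and its cgroupDriver must agree with the SystemdCgroup = false rewrite applied to containerd earlier in the log. A stdlib-only sketch of that consistency check, assuming both files exist on the node; a real parser would use a YAML library rather than substring matching:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	cfg, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	toml, err := os.ReadFile("/etc/containerd/config.toml")
	if err != nil {
		panic(err)
	}
	kubeletCgroupfs := strings.Contains(string(cfg), "cgroupDriver: cgroupfs")
	containerdSystemd := strings.Contains(string(toml), "SystemdCgroup = true")
	if kubeletCgroupfs && containerdSystemd {
		fmt.Println("mismatch: kubelet uses cgroupfs but containerd uses systemd cgroups")
		os.Exit(1)
	}
	fmt.Println("cgroup drivers consistent")
}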
	I1212 01:33:16.730349  287206 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:33:16.733792  287206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:33:16.743217  287206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:33:16.879123  287206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:33:16.896403  287206 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053 for IP: 192.168.85.2
	I1212 01:33:16.896424  287206 certs.go:195] generating shared ca certs ...
	I1212 01:33:16.896440  287206 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:33:16.896611  287206 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 01:33:16.896673  287206 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 01:33:16.896685  287206 certs.go:257] generating profile certs ...
	I1212 01:33:16.896802  287206 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/client.key
	I1212 01:33:16.896884  287206 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.key.40e68572
	I1212 01:33:16.896936  287206 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/proxy-client.key
	I1212 01:33:16.897085  287206 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 01:33:16.897122  287206 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 01:33:16.897140  287206 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:33:16.897182  287206 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:33:16.897211  287206 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:33:16.897253  287206 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 01:33:16.897323  287206 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:33:16.898045  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:33:16.917558  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:33:16.936420  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:33:16.954703  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:33:16.973775  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:33:16.993771  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:33:17.013800  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:33:17.032752  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:33:17.050974  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 01:33:17.069067  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:33:17.086383  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 01:33:17.103777  287206 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:33:17.116500  287206 ssh_runner.go:195] Run: openssl version
	I1212 01:33:17.123250  287206 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 01:33:17.130602  287206 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 01:33:17.138023  287206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 01:33:17.141876  287206 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 01:33:17.141967  287206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 01:33:17.183155  287206 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 01:33:17.190531  287206 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:33:17.197720  287206 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:33:17.205424  287206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:33:17.209634  287206 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:33:17.209717  287206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:33:17.250661  287206 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:33:17.257979  287206 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 01:33:17.265084  287206 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 01:33:17.272550  287206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 01:33:17.276176  287206 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 01:33:17.276244  287206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 01:33:17.316946  287206 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
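(annotation) Each certificate block above follows the same pattern: link the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, then verify the /etc/ssl/certs/<hash>.0 symlink that OpenSSL's CA lookup expects (e.g. 3ec20f2e.0, b5213941.0, 51391683.0). A sketch of the hash-and-link step, shelling out to openssl since the subject hash is OpenSSL-specific; the PEM path comes from the log and writing under /etc/ssl/certs needs root:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/42902.pem"
	// same invocation as the log: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ignore error: the link may not exist yet
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
}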
	I1212 01:33:17.324295  287206 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:33:17.327973  287206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:33:17.368953  287206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:33:17.409868  287206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:33:17.453118  287206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:33:17.504589  287206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:33:17.551985  287206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
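(annotation) The run of "openssl x509 -checkend 86400" calls above asks whether each control-plane certificate survives the next 24 hours (86400 seconds). The same check expressed with crypto/x509, using one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend 86400 fails if NotAfter falls within now+24h
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h")
}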
	I1212 01:33:17.599976  287206 kubeadm.go:401] StartCluster: {Name:no-preload-361053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:33:17.600060  287206 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 01:33:17.600116  287206 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:33:17.627743  287206 cri.go:89] found id: ""
	I1212 01:33:17.627848  287206 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:33:17.635686  287206 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 01:33:17.635706  287206 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 01:33:17.635790  287206 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:33:17.642948  287206 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:33:17.643377  287206 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-361053" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:33:17.643480  287206 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-2343/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-361053" cluster setting kubeconfig missing "no-preload-361053" context setting]
	I1212 01:33:17.643818  287206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:33:17.645054  287206 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:33:17.652754  287206 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1212 01:33:17.652837  287206 kubeadm.go:602] duration metric: took 17.12476ms to restartPrimaryControlPlane
	I1212 01:33:17.652856  287206 kubeadm.go:403] duration metric: took 52.888265ms to StartCluster
	I1212 01:33:17.652873  287206 settings.go:142] acquiring lock: {Name:mk6dd4250df69aeba4752e9f33aeef37272375c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:33:17.652935  287206 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:33:17.654183  287206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:33:17.654577  287206 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 01:33:17.656196  287206 config.go:182] Loaded profile config "no-preload-361053": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:33:17.656293  287206 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:33:17.656907  287206 addons.go:70] Setting storage-provisioner=true in profile "no-preload-361053"
	I1212 01:33:17.656987  287206 addons.go:239] Setting addon storage-provisioner=true in "no-preload-361053"
	I1212 01:33:17.657033  287206 host.go:66] Checking if "no-preload-361053" exists ...
	I1212 01:33:17.657291  287206 addons.go:70] Setting dashboard=true in profile "no-preload-361053"
	I1212 01:33:17.657329  287206 addons.go:239] Setting addon dashboard=true in "no-preload-361053"
	W1212 01:33:17.657367  287206 addons.go:248] addon dashboard should already be in state true
	I1212 01:33:17.657411  287206 host.go:66] Checking if "no-preload-361053" exists ...
	I1212 01:33:17.658033  287206 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:33:17.658984  287206 addons.go:70] Setting default-storageclass=true in profile "no-preload-361053"
	I1212 01:33:17.659056  287206 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-361053"
	I1212 01:33:17.659163  287206 out.go:179] * Verifying Kubernetes components...
	I1212 01:33:17.659657  287206 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:33:17.659918  287206 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:33:17.663168  287206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:33:17.699556  287206 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:33:17.702548  287206 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:33:17.702568  287206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:33:17.702633  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:17.707904  287206 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 01:33:17.712570  287206 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 01:33:17.715424  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 01:33:17.715452  287206 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 01:33:17.715527  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:17.716395  287206 addons.go:239] Setting addon default-storageclass=true in "no-preload-361053"
	I1212 01:33:17.716432  287206 host.go:66] Checking if "no-preload-361053" exists ...
	I1212 01:33:17.716844  287206 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:33:17.757307  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:17.780041  287206 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:33:17.780062  287206 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:33:17.780201  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:17.787971  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:17.824270  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:17.914381  287206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:33:17.932340  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:33:17.963955  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:33:17.997943  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 01:33:17.997970  287206 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 01:33:18.029336  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 01:33:18.029363  287206 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 01:33:18.049546  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 01:33:18.049613  287206 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 01:33:18.063361  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 01:33:18.063384  287206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 01:33:18.077187  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 01:33:18.077211  287206 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 01:33:18.090368  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 01:33:18.090397  287206 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 01:33:18.104111  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 01:33:18.104141  287206 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 01:33:18.117846  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 01:33:18.117869  287206 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 01:33:18.130797  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:33:18.130820  287206 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 01:33:18.144585  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:33:18.535208  287206 node_ready.go:35] waiting up to 6m0s for node "no-preload-361053" to be "Ready" ...
	W1212 01:33:18.535668  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.535741  287206 retry.go:31] will retry after 176.168279ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:18.535830  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.535866  287206 retry.go:31] will retry after 310.631399ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:18.536093  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.536119  287206 retry.go:31] will retry after 343.133583ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.712568  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:18.773707  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.773739  287206 retry.go:31] will retry after 503.490188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
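(annotation) The "retry.go:31] will retry after ..." lines above (176ms, 310ms, 343ms, 503ms, ...) show the addon applies being re-run with a growing, jittered delay while the apiserver on localhost:8443 still refuses connections. A sketch of that retry-with-backoff shape; the delays, attempt cap, and the direct kubectl invocation are illustrative, not minikube's exact policy:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func main() {
	delay := 150 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		err := exec.Command("kubectl", "apply", "--force", "-f",
			"/etc/kubernetes/addons/storage-provisioner.yaml").Run()
		if err == nil {
			fmt.Println("applied")
			return
		}
		// add jitter so parallel appliers don't all retry in lockstep
		jitter := time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d failed (%v), will retry after %v\n", attempt, err, delay+jitter)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	fmt.Println("giving up")
}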
	I1212 01:33:18.847154  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:33:18.879640  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:18.920064  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.920144  287206 retry.go:31] will retry after 545.970645ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:18.950800  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.950834  287206 retry.go:31] will retry after 319.954632ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1212 01:33:19.271042  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:33:19.278476  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:19.399940  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:19.399978  287206 retry.go:31] will retry after 290.065244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1212 01:33:19.400038  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:19.400050  287206 retry.go:31] will retry after 299.213835ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1212 01:33:19.466369  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:19.524517  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:19.524549  287206 retry.go:31] will retry after 743.245184ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1212 01:33:19.690541  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:33:19.700168  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:19.757922  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:19.758015  287206 retry.go:31] will retry after 985.188119ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1212 01:33:19.779719  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:19.779761  287206 retry.go:31] will retry after 704.931485ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1212 01:33:20.267995  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:20.329699  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:20.329775  287206 retry.go:31] will retry after 765.58633ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1212 01:33:20.485196  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:20.536023  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:33:20.550357  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:20.550436  287206 retry.go:31] will retry after 1.819808593s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1212 01:33:20.743955  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:20.831697  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:20.831734  287206 retry.go:31] will retry after 930.762916ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1212 01:33:21.095851  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:21.157009  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:21.157042  287206 retry.go:31] will retry after 1.605590789s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1212 01:33:21.763111  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:21.825538  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:21.825574  287206 retry.go:31] will retry after 2.503052767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1212 01:33:22.370497  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:22.431275  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:22.431307  287206 retry.go:31] will retry after 2.355012393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1212 01:33:22.763437  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:22.850160  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:22.850194  287206 retry.go:31] will retry after 1.879850762s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1212 01:33:23.035858  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:24.329354  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:24.389132  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:24.389164  287206 retry.go:31] will retry after 2.014894624s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1212 01:33:24.731243  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:33:24.786964  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:24.789370  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:24.789397  287206 retry.go:31] will retry after 4.117004363s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1212 01:33:24.843221  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:24.843251  287206 retry.go:31] will retry after 1.752927223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1212 01:33:25.535881  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:26.405127  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:26.464187  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:26.464220  287206 retry.go:31] will retry after 5.197320965s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:26.596983  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:26.656070  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:26.656104  287206 retry.go:31] will retry after 5.533382625s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:28.035833  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:28.907563  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:28.966861  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:28.966896  287206 retry.go:31] will retry after 5.418423295s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:30.036974  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:30.663739  276743 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001214099s
	I1212 01:33:30.663765  276743 kubeadm.go:319] 
	I1212 01:33:30.663824  276743 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:33:30.664225  276743 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:33:30.664463  276743 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:33:30.664470  276743 kubeadm.go:319] 
	I1212 01:33:30.664859  276743 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:33:30.664924  276743 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:33:30.664997  276743 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:33:30.665002  276743 kubeadm.go:319] 
	I1212 01:33:30.670247  276743 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:33:30.670737  276743 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:33:30.670876  276743 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:33:30.671132  276743 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 01:33:30.671145  276743 kubeadm.go:319] 
	I1212 01:33:30.671240  276743 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:33:30.671353  276743 kubeadm.go:403] duration metric: took 8m6.695748826s to StartCluster
	I1212 01:33:30.671412  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:33:30.671514  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:33:30.695843  276743 cri.go:89] found id: ""
	I1212 01:33:30.695865  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.695874  276743 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:33:30.695882  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:33:30.695947  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:33:30.721320  276743 cri.go:89] found id: ""
	I1212 01:33:30.721346  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.721355  276743 logs.go:284] No container was found matching "etcd"
	I1212 01:33:30.721361  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:33:30.721447  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:33:30.745399  276743 cri.go:89] found id: ""
	I1212 01:33:30.745432  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.745441  276743 logs.go:284] No container was found matching "coredns"
	I1212 01:33:30.745447  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:33:30.745544  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:33:30.770020  276743 cri.go:89] found id: ""
	I1212 01:33:30.770053  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.770062  276743 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:33:30.770082  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:33:30.770166  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:33:30.793304  276743 cri.go:89] found id: ""
	I1212 01:33:30.793329  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.793338  276743 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:33:30.793344  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:33:30.793405  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:33:30.821216  276743 cri.go:89] found id: ""
	I1212 01:33:30.821286  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.821295  276743 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:33:30.821302  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:33:30.821374  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:33:30.849092  276743 cri.go:89] found id: ""
	I1212 01:33:30.849118  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.849127  276743 logs.go:284] No container was found matching "kindnet"
	I1212 01:33:30.849160  276743 logs.go:123] Gathering logs for kubelet ...
	I1212 01:33:30.849178  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:33:30.908511  276743 logs.go:123] Gathering logs for dmesg ...
	I1212 01:33:30.908546  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:33:30.921702  276743 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:33:30.921728  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:33:30.986459  276743 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:33:30.978227    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.978917    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.980428    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.980954    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.982546    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:33:30.978227    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.978917    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.980428    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.980954    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.982546    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:33:30.986482  276743 logs.go:123] Gathering logs for containerd ...
	I1212 01:33:30.986494  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:33:31.026654  276743 logs.go:123] Gathering logs for container status ...
	I1212 01:33:31.026689  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:33:31.065726  276743 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001214099s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:33:31.065772  276743 out.go:285] * 
	W1212 01:33:31.065854  276743 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001214099s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:33:31.065866  276743 out.go:285] * 
	W1212 01:33:31.067985  276743 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:33:31.073102  276743 out.go:203] 
	W1212 01:33:31.076901  276743 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001214099s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:33:31.076950  276743 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 01:33:31.076972  276743 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 01:33:31.079948  276743 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884168277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884235388Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884325300Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884398827Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884472510Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884537996Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884594398Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884658768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884723680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884808743Z" level=info msg="Connect containerd service"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.885150498Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.885821438Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.897230715Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.897398782Z" level=info msg="Start subscribing containerd event"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.897528104Z" level=info msg="Start recovering state"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.897473433Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.936318243Z" level=info msg="Start event monitor"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.936517777Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.936583329Z" level=info msg="Start streaming server"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.936647608Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.936703617Z" level=info msg="runtime interface starting up..."
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.936753997Z" level=info msg="starting plugins..."
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.936827064Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.937025409Z" level=info msg="containerd successfully booted in 0.077505s"
	Dec 12 01:25:21 newest-cni-256959 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:33:32.159452    4930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:32.159997    4930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:32.161652    4930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:32.162014    4930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:32.163474    4930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	[Dec12 00:40] hrtimer: interrupt took 11339963 ns
	
	
	==> kernel <==
	 01:33:32 up  2:15,  0 user,  load average: 0.64, 1.03, 1.75
	Linux newest-cni-256959 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 01:33:28 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:33:29 newest-cni-256959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 12 01:33:29 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:33:29 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:33:29 newest-cni-256959 kubelet[4732]: E1212 01:33:29.585171    4732 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:33:29 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:33:29 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:33:30 newest-cni-256959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 12 01:33:30 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:33:30 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:33:30 newest-cni-256959 kubelet[4738]: E1212 01:33:30.333341    4738 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:33:30 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:33:30 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:33:31 newest-cni-256959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 12 01:33:31 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:33:31 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:33:31 newest-cni-256959 kubelet[4817]: E1212 01:33:31.106539    4817 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:33:31 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:33:31 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:33:31 newest-cni-256959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 12 01:33:31 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:33:31 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:33:31 newest-cni-256959 kubelet[4852]: E1212 01:33:31.847506    4852 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:33:31 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:33:31 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
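The kubelet section at the end of this log is the root cause for the run: kubelet v1.35.0-beta.0 exits on startup with "kubelet is configured to not run on a host using cgroup v1", systemd restarts it in a loop (counter 318 through 321 in the window captured), the control-plane static pods are never created, and every request to localhost:8443 is refused. The generic cgroup-driver suggestion minikube prints does not apply here; the host is simply still on cgroup v1. A minimal Go sketch (not minikube code) of checking which cgroup mode the host mounts, using CGROUP2_SUPER_MAGIC from linux/magic.h:

    // Sketch: report whether /sys/fs/cgroup is the cgroup v2 unified
    // hierarchy, which kubelet v1.35 requires unless FailCgroupV1 is
    // explicitly set to false (per the kubeadm warning above).
    package main

    import (
        "fmt"
        "syscall"
    )

    // CGROUP2_SUPER_MAGIC from linux/magic.h ("cgrp").
    const cgroup2SuperMagic = 0x63677270

    func main() {
        var fs syscall.Statfs_t
        if err := syscall.Statfs("/sys/fs/cgroup", &fs); err != nil {
            fmt.Println("statfs failed:", err)
            return
        }
        if fs.Type == cgroup2SuperMagic {
            fmt.Println("cgroup v2 (unified) - kubelet v1.35 can start")
        } else {
            fmt.Println("cgroup v1 - kubelet v1.35 refuses to start by default")
        }
    }

On this builder (kernel 5.15.0-1084-aws, "#91~20.04.1-Ubuntu" per the kernel section above) the check would report cgroup v1, matching the kubeadm warning that v1 support must now be opted into via the kubelet's FailCgroupV1=false.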
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-256959 -n newest-cni-256959
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-256959 -n newest-cni-256959: exit status 6 (344.159378ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 01:33:32.644235  289186 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-256959" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-256959" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (502.09s)
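For context on the 4m0s wait that dominates this test's 502s runtime: per the log, the kubeadm wait-control-plane phase is just a poll of the kubelet healthz endpoint ("curl -sSL http://127.0.0.1:10248/healthz", "This can take up to 4m0s"). A minimal illustrative Go equivalent, with the endpoint and four-minute budget taken from the log and everything else (poll interval, error handling) assumed:

    // Sketch of the health probe kubeadm describes: poll the kubelet
    // healthz endpoint until it answers 200 OK or the deadline passes.
    package main

    import (
        "context"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        for {
            req, _ := http.NewRequestWithContext(ctx, http.MethodGet,
                "http://127.0.0.1:10248/healthz", nil)
            resp, err := http.DefaultClient.Do(req)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("kubelet is healthy")
                    return
                }
            }
            select {
            case <-ctx.Done():
                // Matches the "context deadline exceeded" in the log.
                fmt.Println("kubelet is not healthy:", ctx.Err())
                return
            case <-time.After(2 * time.Second):
            }
        }
    }

Because the kubelet process dies during configuration validation, before it can serve :10248, this poll can only end in the "context deadline exceeded" recorded above.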

x
+
TestStartStop/group/no-preload/serial/DeployApp (3.04s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-361053 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-361053 create -f testdata/busybox.yaml: exit status 1 (63.968325ms)

** stderr ** 
	error: context "no-preload-361053" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-361053 create -f testdata/busybox.yaml failed: exit status 1
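This failure, like the rest of the no-preload serial group, is a cascade from FirstStart: the cluster never came up, so minikube never wrote a "no-preload-361053" context into the kubeconfig, and every `kubectl --context no-preload-361053` invocation fails immediately without touching the cluster. A minimal sketch of confirming which contexts a kubeconfig actually contains (assuming k8s.io/client-go is available and KUBECONFIG points at the file; both are assumptions, not part of this report):

    // Sketch: load a kubeconfig and list its contexts to explain the
    // "context does not exist" error above.
    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG")) // path is an assumption
        if err != nil {
            fmt.Println("load kubeconfig:", err)
            return
        }
        for name := range cfg.Contexts {
            fmt.Println("context:", name)
        }
        if _, ok := cfg.Contexts["no-preload-361053"]; !ok {
            fmt.Println(`context "no-preload-361053" does not exist`)
        }
    }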
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-361053
helpers_test.go:244: (dbg) docker inspect no-preload-361053:

-- stdout --
	[
	    {
	        "Id": "68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd",
	        "Created": "2025-12-12T01:22:53.604240637Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 268910,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T01:22:53.788312247Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/hostname",
	        "HostsPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/hosts",
	        "LogPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd-json.log",
	        "Name": "/no-preload-361053",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-361053:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-361053",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd",
	                "LowerDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-361053",
	                "Source": "/var/lib/docker/volumes/no-preload-361053/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-361053",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-361053",
	                "name.minikube.sigs.k8s.io": "no-preload-361053",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6d73be6e1f66a3f7c6d96dca30aa8c1389affdac21224c7034e0e227db3e8397",
	            "SandboxKey": "/var/run/docker/netns/6d73be6e1f66",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-361053": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:21:58:59:ae:af",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee086efedb5c3900c251cd31f9316499408470e70a7d486e64d8b91c6bf60cd7",
	                    "EndpointID": "ae778ff101bac87a43f1ea9fade85a6810900e2d9b74a07254c68fbc89db3f07",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-361053",
	                        "68256fe8de3b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
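The inspect dump above is where the port mappings and the per-profile network attachment for the kic container live; those are usually the only fields needed for triage. A minimal sketch for pulling just those fields on the test host (docker CLI only; the profile name is taken from the log above):

	# Published host ports for the kic container (e.g. 8443 -> 127.0.0.1:33086 above):
	docker container inspect no-preload-361053 --format '{{json .NetworkSettings.Ports}}'
	# Static IP assigned on the per-profile bridge network:
	docker container inspect no-preload-361053 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'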
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-361053 -n no-preload-361053
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-361053 -n no-preload-361053: exit status 6 (312.734457ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 01:31:24.694480  284442 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-361053" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
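Exit status 6 here is a kubeconfig problem rather than a dead host: the node container reports Running, but the profile has no entry in the kubeconfig the harness points at, so `status` cannot resolve an API endpoint. A hedged sketch of the manual check and the repair the warning itself suggests (not something the harness runs):

	# Confirm the context is missing from the kubeconfig named in the stderr above:
	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/22101-2343/kubeconfig
	# Rewrite the context from live cluster state, per the warning in the status output:
	minikube update-context -p no-preload-361053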
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-361053 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-971096 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:21 UTC │ 12 Dec 25 01:21 UTC │
	│ start   │ -p default-k8s-diff-port-971096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:21 UTC │ 12 Dec 25 01:22 UTC │
	│ image   │ old-k8s-version-147581 image list --format=json                                                                                                                                                                                                            │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ pause   │ -p old-k8s-version-147581 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ unpause │ -p old-k8s-version-147581 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p old-k8s-version-147581                                                                                                                                                                                                                                  │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p old-k8s-version-147581                                                                                                                                                                                                                                  │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:23 UTC │
	│ image   │ default-k8s-diff-port-971096 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ pause   │ -p default-k8s-diff-port-971096 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ unpause │ -p default-k8s-diff-port-971096 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p disable-driver-mounts-539158                                                                                                                                                                                                                            │ disable-driver-mounts-539158 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-648696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ stop    │ -p embed-certs-648696 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ addons  │ enable dashboard -p embed-certs-648696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:24 UTC │
	│ image   │ embed-certs-648696 image list --format=json                                                                                                                                                                                                                │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ pause   │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ unpause │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ start   │ -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 01:25:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:25:10.610326  276743 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:25:10.611013  276743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:25:10.611028  276743 out.go:374] Setting ErrFile to fd 2...
	I1212 01:25:10.611033  276743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:25:10.611296  276743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:25:10.611727  276743 out.go:368] Setting JSON to false
	I1212 01:25:10.612585  276743 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7657,"bootTime":1765495054,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 01:25:10.612655  276743 start.go:143] virtualization:  
	I1212 01:25:10.616537  276743 out.go:179] * [newest-cni-256959] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:25:10.620721  276743 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:25:10.620800  276743 notify.go:221] Checking for updates...
	I1212 01:25:10.627029  276743 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:25:10.630074  276743 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:25:10.633037  276743 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 01:25:10.635913  276743 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:25:10.638863  276743 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:25:10.642342  276743 config.go:182] Loaded profile config "no-preload-361053": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:25:10.642439  276743 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:25:10.663336  276743 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:25:10.663491  276743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:25:10.731650  276743 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:25:10.720876374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:25:10.731750  276743 docker.go:319] overlay module found
	I1212 01:25:10.734985  276743 out.go:179] * Using the docker driver based on user configuration
	I1212 01:25:10.738034  276743 start.go:309] selected driver: docker
	I1212 01:25:10.738050  276743 start.go:927] validating driver "docker" against <nil>
	I1212 01:25:10.738062  276743 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:25:10.738778  276743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:25:10.802614  276743 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:25:10.791452052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:25:10.802835  276743 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1212 01:25:10.802872  276743 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1212 01:25:10.803130  276743 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 01:25:10.806190  276743 out.go:179] * Using Docker driver with root privileges
	I1212 01:25:10.809707  276743 cni.go:84] Creating CNI manager for ""
	I1212 01:25:10.809779  276743 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:25:10.809793  276743 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 01:25:10.809867  276743 start.go:353] cluster config:
	{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:25:10.813089  276743 out.go:179] * Starting "newest-cni-256959" primary control-plane node in "newest-cni-256959" cluster
	I1212 01:25:10.815885  276743 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 01:25:10.818744  276743 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:25:10.821553  276743 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:25:10.821612  276743 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 01:25:10.821626  276743 cache.go:65] Caching tarball of preloaded images
	I1212 01:25:10.821716  276743 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 01:25:10.821731  276743 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 01:25:10.821841  276743 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:25:10.821864  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json: {Name:mk4998d8ef384508a1b134495f81d7fc826b1990 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
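	Everything in the cluster config blob logged above is persisted to the profile's config.json (path in the WriteFile line above), so later invocations can reload it. A quick way to read it back, assuming the on-disk layout matches the struct logged here and `jq` is available on the host (it is not part of the harness):

	# Pull the Kubernetes-specific slice of the saved profile config (jq assumed):
	jq '.KubernetesConfig | {KubernetesVersion, ClusterName, NetworkPlugin, ExtraOptions}' \
	  /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json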
	I1212 01:25:10.822019  276743 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:25:10.842165  276743 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:25:10.842191  276743 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:25:10.842204  276743 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:25:10.842235  276743 start.go:360] acquireMachinesLock for newest-cni-256959: {Name:mke4c35c218ad59b1da2c46074b57e71134fc7be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:25:10.842335  276743 start.go:364] duration metric: took 80.822µs to acquireMachinesLock for "newest-cni-256959"
	I1212 01:25:10.842366  276743 start.go:93] Provisioning new machine with config: &{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 01:25:10.842439  276743 start.go:125] createHost starting for "" (driver="docker")
	I1212 01:25:10.846610  276743 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 01:25:10.846848  276743 start.go:159] libmachine.API.Create for "newest-cni-256959" (driver="docker")
	I1212 01:25:10.846884  276743 client.go:173] LocalClient.Create starting
	I1212 01:25:10.846956  276743 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem
	I1212 01:25:10.847041  276743 main.go:143] libmachine: Decoding PEM data...
	I1212 01:25:10.847062  276743 main.go:143] libmachine: Parsing certificate...
	I1212 01:25:10.847105  276743 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem
	I1212 01:25:10.847126  276743 main.go:143] libmachine: Decoding PEM data...
	I1212 01:25:10.847142  276743 main.go:143] libmachine: Parsing certificate...
	I1212 01:25:10.847512  276743 cli_runner.go:164] Run: docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 01:25:10.866623  276743 cli_runner.go:211] docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 01:25:10.866711  276743 network_create.go:284] running [docker network inspect newest-cni-256959] to gather additional debugging logs...
	I1212 01:25:10.866732  276743 cli_runner.go:164] Run: docker network inspect newest-cni-256959
	W1212 01:25:10.882830  276743 cli_runner.go:211] docker network inspect newest-cni-256959 returned with exit code 1
	I1212 01:25:10.882862  276743 network_create.go:287] error running [docker network inspect newest-cni-256959]: docker network inspect newest-cni-256959: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-256959 not found
	I1212 01:25:10.882876  276743 network_create.go:289] output of [docker network inspect newest-cni-256959]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-256959 not found
	
	** /stderr **
	I1212 01:25:10.883058  276743 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:25:10.899622  276743 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4cd687b06342 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a2:e8:c8:87:d3:0a} reservation:<nil>}
	I1212 01:25:10.899939  276743 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c02c16721c9d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:e7:06:63:2c:e9} reservation:<nil>}
	I1212 01:25:10.900288  276743 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-805b07ff58c0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:18:35:7a:03:02} reservation:<nil>}
	I1212 01:25:10.900688  276743 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a309a0}
	I1212 01:25:10.900712  276743 network_create.go:124] attempt to create docker network newest-cni-256959 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1212 01:25:10.900767  276743 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-256959 newest-cni-256959
	I1212 01:25:10.956774  276743 network_create.go:108] docker network newest-cni-256959 192.168.76.0/24 created
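	Before this create, minikube walked the 192.168.x.0/24 ladder and skipped three subnets already claimed by existing bridges (the network.go lines above), which is why this profile landed on 192.168.76.0/24. The result can be checked with the docker CLI alone:

	# Bridges created by minikube, identified by the labels passed to `docker network create` above:
	docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
	# Subnet and gateway actually assigned to this profile's bridge:
	docker network inspect newest-cni-256959 --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'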
	I1212 01:25:10.956809  276743 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-256959" container
	I1212 01:25:10.956884  276743 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 01:25:10.973399  276743 cli_runner.go:164] Run: docker volume create newest-cni-256959 --label name.minikube.sigs.k8s.io=newest-cni-256959 --label created_by.minikube.sigs.k8s.io=true
	I1212 01:25:10.995879  276743 oci.go:103] Successfully created a docker volume newest-cni-256959
	I1212 01:25:10.995970  276743 cli_runner.go:164] Run: docker run --rm --name newest-cni-256959-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-256959 --entrypoint /usr/bin/test -v newest-cni-256959:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 01:25:11.527236  276743 oci.go:107] Successfully prepared a docker volume newest-cni-256959
	I1212 01:25:11.527311  276743 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:25:11.527325  276743 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 01:25:11.527417  276743 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-256959:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 01:25:15.366008  276743 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-256959:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.838540611s)
	I1212 01:25:15.366040  276743 kic.go:203] duration metric: took 3.838711624s to extract preloaded images to volume ...
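	The two `docker run --rm` calls above are a probe-then-extract pattern: a throwaway container first verifies the named volume is usable (`/usr/bin/test -d /var/lib`), then a second one unpacks the preload tarball into it so the node container starts with images already in place. The same shape reduced to placeholders (VOLUME, TARBALL, and IMAGE stand in for the values logged above):

	# Probe the volume, then extract the lz4 tarball into it; placeholders are illustrative only.
	docker run --rm --entrypoint /usr/bin/test -v VOLUME:/var IMAGE -d /var/lib
	docker run --rm --entrypoint /usr/bin/tar \
	  -v TARBALL:/preloaded.tar:ro -v VOLUME:/extractDir IMAGE -I lz4 -xf /preloaded.tar -C /extractDir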
	W1212 01:25:15.366202  276743 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 01:25:15.366316  276743 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 01:25:15.418308  276743 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-256959 --name newest-cni-256959 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-256959 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-256959 --network newest-cni-256959 --ip 192.168.76.2 --volume newest-cni-256959:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 01:25:15.701138  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Running}}
	I1212 01:25:15.722831  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:25:15.744776  276743 cli_runner.go:164] Run: docker exec newest-cni-256959 stat /var/lib/dpkg/alternatives/iptables
	I1212 01:25:15.800337  276743 oci.go:144] the created container "newest-cni-256959" has a running status.
	I1212 01:25:15.800363  276743 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa...
	I1212 01:25:16.255229  276743 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 01:25:16.278394  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:25:16.296982  276743 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 01:25:16.297007  276743 kic_runner.go:114] Args: [docker exec --privileged newest-cni-256959 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 01:25:16.336579  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:25:16.356755  276743 machine.go:94] provisionDockerMachine start ...
	I1212 01:25:16.356843  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:16.374168  276743 main.go:143] libmachine: Using SSH client type: native
	I1212 01:25:16.374501  276743 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 01:25:16.374511  276743 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:25:16.375249  276743 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42232->127.0.0.1:33093: read: connection reset by peer
	I1212 01:25:19.530494  276743 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:25:19.530520  276743 ubuntu.go:182] provisioning hostname "newest-cni-256959"
	I1212 01:25:19.530584  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:19.548139  276743 main.go:143] libmachine: Using SSH client type: native
	I1212 01:25:19.548459  276743 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 01:25:19.548475  276743 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-256959 && echo "newest-cni-256959" | sudo tee /etc/hostname
	I1212 01:25:19.704022  276743 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:25:19.704112  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:19.721641  276743 main.go:143] libmachine: Using SSH client type: native
	I1212 01:25:19.721955  276743 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 01:25:19.721980  276743 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-256959' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-256959/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-256959' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:25:19.879218  276743 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:25:19.879312  276743 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 01:25:19.879367  276743 ubuntu.go:190] setting up certificates
	I1212 01:25:19.879397  276743 provision.go:84] configureAuth start
	I1212 01:25:19.879518  276743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:25:19.896152  276743 provision.go:143] copyHostCerts
	I1212 01:25:19.896221  276743 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 01:25:19.896234  276743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 01:25:19.896315  276743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 01:25:19.896434  276743 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 01:25:19.896445  276743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 01:25:19.896476  276743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 01:25:19.896542  276743 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 01:25:19.896551  276743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 01:25:19.896577  276743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 01:25:19.896641  276743 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.newest-cni-256959 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-256959]
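	The server certificate generated here must carry every name the apiserver will be reached by, hence the SAN list in the log line above. If a TLS failure shows up later in the run, the quickest check is to read the SANs back out of the generated cert (a sketch; openssl 1.1.1+ assumed on the host):

	# Print the subjectAltName extension of the machine server cert (path from the log above):
	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem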
	I1212 01:25:20.204760  276743 provision.go:177] copyRemoteCerts
	I1212 01:25:20.204827  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:25:20.204875  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.224116  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.330622  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:25:20.348480  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:25:20.366287  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:25:20.383425  276743 provision.go:87] duration metric: took 503.997002ms to configureAuth
	I1212 01:25:20.383450  276743 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:25:20.383651  276743 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:25:20.383658  276743 machine.go:97] duration metric: took 4.026884265s to provisionDockerMachine
	I1212 01:25:20.383665  276743 client.go:176] duration metric: took 9.536770098s to LocalClient.Create
	I1212 01:25:20.383678  276743 start.go:167] duration metric: took 9.536832859s to libmachine.API.Create "newest-cni-256959"
	I1212 01:25:20.383685  276743 start.go:293] postStartSetup for "newest-cni-256959" (driver="docker")
	I1212 01:25:20.383694  276743 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:25:20.383742  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:25:20.383784  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.400325  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.507208  276743 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:25:20.510550  276743 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:25:20.510580  276743 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:25:20.510595  276743 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 01:25:20.510649  276743 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 01:25:20.510733  276743 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 01:25:20.510839  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:25:20.518172  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:25:20.536052  276743 start.go:296] duration metric: took 152.353471ms for postStartSetup
	I1212 01:25:20.536438  276743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:25:20.555757  276743 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:25:20.556035  276743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:25:20.556076  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.580490  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.688189  276743 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:25:20.693057  276743 start.go:128] duration metric: took 9.850603168s to createHost
	I1212 01:25:20.693084  276743 start.go:83] releasing machines lock for "newest-cni-256959", held for 9.850734377s
	I1212 01:25:20.693172  276743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:25:20.709859  276743 ssh_runner.go:195] Run: cat /version.json
	I1212 01:25:20.709914  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.710177  276743 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:25:20.710239  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.729457  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.741797  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.834710  276743 ssh_runner.go:195] Run: systemctl --version
	I1212 01:25:20.924313  276743 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:25:20.928847  276743 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:25:20.928951  276743 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:25:20.954175  276743 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
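	Note that the `find` invocation above is logged with its shell quoting stripped, so it is not copy-pasteable as shown. A hedged reconstruction that runs under GNU find (which substitutes {} inside arguments, as this command relies on):

	# Re-quoted form of the logged bridge-CNI disable step; relies on GNU find's in-argument {} substitution.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;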
	I1212 01:25:20.954200  276743 start.go:496] detecting cgroup driver to use...
	I1212 01:25:20.954231  276743 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:25:20.954281  276743 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 01:25:20.969656  276743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 01:25:20.982642  276743 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:25:20.982710  276743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:25:20.999922  276743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:25:21.020615  276743 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:25:21.141838  276743 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:25:21.263080  276743 docker.go:234] disabling docker service ...
	I1212 01:25:21.263148  276743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:25:21.287246  276743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:25:21.310750  276743 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:25:21.444187  276743 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:25:21.567374  276743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:25:21.580397  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:25:21.594203  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 01:25:21.603451  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 01:25:21.612481  276743 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 01:25:21.612614  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 01:25:21.621568  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:25:21.630132  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 01:25:21.639772  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:25:21.648550  276743 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:25:21.656926  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 01:25:21.666027  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 01:25:21.675421  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 01:25:21.684275  276743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:25:21.692101  276743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:25:21.699082  276743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:25:21.804978  276743 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 01:25:21.939895  276743 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 01:25:21.939976  276743 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 01:25:21.944016  276743 start.go:564] Will wait 60s for crictl version
	I1212 01:25:21.944158  276743 ssh_runner.go:195] Run: which crictl
	I1212 01:25:21.947592  276743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:25:21.970388  276743 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 01:25:21.970506  276743 ssh_runner.go:195] Run: containerd --version
	I1212 01:25:21.989928  276743 ssh_runner.go:195] Run: containerd --version
	I1212 01:25:22.016197  276743 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 01:25:22.019317  276743 cli_runner.go:164] Run: docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:25:22.042635  276743 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 01:25:22.047510  276743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:25:22.062564  276743 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 01:25:22.065397  276743 kubeadm.go:884] updating cluster {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:25:22.065551  276743 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:25:22.065640  276743 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:25:22.102156  276743 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:25:22.102183  276743 containerd.go:534] Images already preloaded, skipping extraction
	I1212 01:25:22.102250  276743 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:25:22.129883  276743 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:25:22.129908  276743 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:25:22.129916  276743 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1212 01:25:22.130003  276743 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-256959 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:25:22.130072  276743 ssh_runner.go:195] Run: sudo crictl info
	I1212 01:25:22.157382  276743 cni.go:84] Creating CNI manager for ""
	I1212 01:25:22.157407  276743 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:25:22.157422  276743 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 01:25:22.157449  276743 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-256959 NodeName:newest-cni-256959 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:25:22.157566  276743 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-256959"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:25:22.157640  276743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 01:25:22.165592  276743 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:25:22.165664  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:25:22.173544  276743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:25:22.186913  276743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 01:25:22.199980  276743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
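	With kubeadm.yaml.new staged on the node, the rendered multi-document config could be sanity-checked before init runs; a sketch using kubeadm's own validator subcommand (present in recent kubeadm releases, and not something this test run actually invokes):

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new

	This only checks API versions and field names; the preflight checks seen later still run as part of kubeadm init itself.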
	I1212 01:25:22.212497  276743 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:25:22.216212  276743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:25:22.226129  276743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:25:22.341565  276743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:25:22.362735  276743 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959 for IP: 192.168.76.2
	I1212 01:25:22.362758  276743 certs.go:195] generating shared ca certs ...
	I1212 01:25:22.362774  276743 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:22.362922  276743 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 01:25:22.362982  276743 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 01:25:22.363063  276743 certs.go:257] generating profile certs ...
	I1212 01:25:22.363128  276743 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key
	I1212 01:25:22.363145  276743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.crt with IP's: []
	I1212 01:25:23.043220  276743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.crt ...
	I1212 01:25:23.043305  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.crt: {Name:mke800b4895a7f26c3f61118ac2a9636e3a9248a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.043557  276743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key ...
	I1212 01:25:23.043596  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key: {Name:mkb2206776a08341de5b9d37086d859f3539aa54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.043743  276743 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93
	I1212 01:25:23.043783  276743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1212 01:25:23.163980  276743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93 ...
	I1212 01:25:23.164017  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93: {Name:mk05b9dd6b8930af6580fe78d40e6026f3e8847a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.164237  276743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93 ...
	I1212 01:25:23.164254  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93: {Name:mk5e2ac6bbc37c39d5b319f8600a5d25e63c4a12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.164355  276743 certs.go:382] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93 -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt
	I1212 01:25:23.164449  276743 certs.go:386] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93 -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key
	I1212 01:25:23.164518  276743 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key
	I1212 01:25:23.164541  276743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt with IP's: []
	I1212 01:25:23.503416  276743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt ...
	I1212 01:25:23.503453  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt: {Name:mka5a6a7cee07eb7c969d496d8aa380d667ba867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.503635  276743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key ...
	I1212 01:25:23.503652  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key: {Name:mkd0b1a9e86a7f90668157e83a73d06f56064ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.503848  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 01:25:23.503900  276743 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 01:25:23.503913  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:25:23.503965  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:25:23.503999  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:25:23.504032  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 01:25:23.504080  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:25:23.504696  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:25:23.524669  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:25:23.551586  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:25:23.572683  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:25:23.594234  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:25:23.612544  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:25:23.629869  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:25:23.647023  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:25:23.664698  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 01:25:23.682123  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:25:23.699502  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 01:25:23.716689  276743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:25:23.729602  276743 ssh_runner.go:195] Run: openssl version
	I1212 01:25:23.735703  276743 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.742851  276743 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 01:25:23.750077  276743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.753690  276743 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.753758  276743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.795206  276743 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:25:23.802639  276743 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4290.pem /etc/ssl/certs/51391683.0
	I1212 01:25:23.809951  276743 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.817139  276743 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 01:25:23.830841  276743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.834926  276743 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.835070  276743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.876422  276743 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 01:25:23.885502  276743 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42902.pem /etc/ssl/certs/3ec20f2e.0
	I1212 01:25:23.893037  276743 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.900652  276743 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:25:23.908635  276743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.912614  276743 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.912690  276743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.956401  276743 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:25:23.964299  276743 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
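	The test/ln/x509-hash sequence above mirrors how OpenSSL discovers trusted CAs: each certificate under /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0. A sketch of the equivalent manual steps for the minikubeCA certificate handled above:

	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941 for this CA
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"

	openssl rehash (or the older c_rehash) builds the same hash links for an entire directory in one pass.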
	I1212 01:25:23.971681  276743 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:25:23.975558  276743 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 01:25:23.975609  276743 kubeadm.go:401] StartCluster: {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:25:23.975697  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 01:25:23.975759  276743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:25:24.003976  276743 cri.go:89] found id: ""
	I1212 01:25:24.004073  276743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:25:24.014227  276743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:25:24.022866  276743 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:25:24.022958  276743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:25:24.031328  276743 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:25:24.031359  276743 kubeadm.go:158] found existing configuration files:
	
	I1212 01:25:24.031425  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:25:24.039632  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:25:24.039710  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:25:24.047426  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:25:24.055269  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:25:24.055386  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:25:24.062906  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:25:24.070757  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:25:24.070846  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:25:24.078322  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:25:24.086235  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:25:24.086340  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:25:24.093978  276743 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:25:24.130495  276743 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:25:24.130556  276743 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:25:24.204494  276743 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:25:24.204576  276743 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:25:24.204617  276743 kubeadm.go:319] OS: Linux
	I1212 01:25:24.204667  276743 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:25:24.204719  276743 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:25:24.204770  276743 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:25:24.204821  276743 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:25:24.204871  276743 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:25:24.204928  276743 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:25:24.204978  276743 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:25:24.205039  276743 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:25:24.205089  276743 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:25:24.274059  276743 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:25:24.274248  276743 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:25:24.274393  276743 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:25:24.281432  276743 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:25:24.288265  276743 out.go:252]   - Generating certificates and keys ...
	I1212 01:25:24.288438  276743 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:25:24.288544  276743 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:25:24.872395  276743 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 01:25:24.948048  276743 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 01:25:25.302518  276743 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 01:25:25.648856  276743 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 01:25:25.789938  276743 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 01:25:25.790397  276743 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 01:25:26.099340  276743 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 01:25:26.099559  276743 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 01:25:26.538607  276743 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 01:25:27.389042  276743 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 01:25:27.842473  276743 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 01:25:27.842877  276743 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:25:27.936371  276743 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:25:28.210661  276743 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:25:28.314836  276743 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:25:28.428208  276743 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:25:28.580595  276743 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:25:28.581418  276743 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:25:28.584199  276743 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:25:28.587820  276743 out.go:252]   - Booting up control plane ...
	I1212 01:25:28.587929  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:25:28.588012  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:25:28.589356  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:25:28.605527  276743 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:25:28.605678  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:25:28.613455  276743 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:25:28.614074  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:25:28.614240  276743 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:25:28.755452  276743 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:25:28.755580  276743 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:27:19.226422  268396 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000962088s
	I1212 01:27:19.226635  268396 kubeadm.go:319] 
	I1212 01:27:19.226702  268396 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:27:19.226735  268396 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:27:19.226840  268396 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:27:19.226847  268396 kubeadm.go:319] 
	I1212 01:27:19.227012  268396 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:27:19.227062  268396 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:27:19.227095  268396 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:27:19.227099  268396 kubeadm.go:319] 
	I1212 01:27:19.231490  268396 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:27:19.231948  268396 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:27:19.232070  268396 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:27:19.232304  268396 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 01:27:19.232318  268396 kubeadm.go:319] 
	W1212 01:27:19.232506  268396 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-361053] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-361053] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000962088s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
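	The failing wait is a plain HTTP probe of kubelet's local healthz endpoint. On the node, the same diagnosis the output suggests can be reproduced directly (a sketch, not part of the test run):

	    systemctl status kubelet --no-pager
	    journalctl -xeu kubelet --no-pager | tail -n 50
	    curl -sS http://127.0.0.1:10248/healthz    # kubeadm waits up to 4m0s for "ok" here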
	
	I1212 01:27:19.232600  268396 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 01:27:19.232891  268396 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:27:19.641819  268396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:27:19.655717  268396 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:27:19.655786  268396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:27:19.664059  268396 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:27:19.664079  268396 kubeadm.go:158] found existing configuration files:
	
	I1212 01:27:19.664128  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:27:19.672510  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:27:19.672575  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:27:19.680342  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:27:19.688315  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:27:19.688383  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:27:19.696209  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:27:19.704155  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:27:19.704219  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:27:19.711899  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:27:19.719844  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:27:19.719910  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:27:19.727687  268396 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:27:19.860959  268396 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:27:19.861382  268396 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:27:19.927748  268396 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:29:28.751512  276743 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000466498s
	I1212 01:29:28.751546  276743 kubeadm.go:319] 
	I1212 01:29:28.751605  276743 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:29:28.751644  276743 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:29:28.751765  276743 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:29:28.751774  276743 kubeadm.go:319] 
	I1212 01:29:28.751883  276743 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:29:28.751919  276743 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:29:28.751963  276743 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:29:28.751972  276743 kubeadm.go:319] 
	I1212 01:29:28.757988  276743 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:29:28.758457  276743 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:29:28.758593  276743 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:29:28.759136  276743 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 01:29:28.759148  276743 kubeadm.go:319] 
	I1212 01:29:28.759296  276743 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 01:29:28.759448  276743 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000466498s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
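	Both init attempts die the same way on this cgroups v1 host, and the SystemVerification warning names the required opt-in for kubelet v1.35 or newer. A sketch of the KubeletConfiguration addition it describes, assuming the YAML field is the lowerCamelCase form of the option named in the warning:

	    apiVersion: kubelet.config.k8s.io/v1beta1
	    kind: KubeletConfiguration
	    # explicitly re-enable deprecated cgroup v1 support, per the warning text
	    failCgroupV1: false

	In the config rendered earlier this would sit alongside cgroupDriver: cgroupfs; per the same warning, the cgroup v1 verification itself must also be explicitly skipped.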
	
	I1212 01:29:28.759537  276743 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 01:29:29.171145  276743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:29:29.184061  276743 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:29:29.184150  276743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:29:29.191792  276743 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:29:29.191813  276743 kubeadm.go:158] found existing configuration files:
	
	I1212 01:29:29.191872  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:29:29.199430  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:29:29.199502  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:29:29.206493  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:29:29.213869  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:29:29.213974  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:29:29.221146  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:29:29.228771  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:29:29.228848  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:29:29.236019  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:29:29.243394  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:29:29.243513  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
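	The grep/rm sequence above is minikube's stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already names the expected control-plane endpoint, and any non-zero grep status (1 for no match, 2 here because the files are missing) triggers removal. A condensed sketch of the same logic, not minikube source:

	    # condensed sketch of the stale-config sweep logged above (not minikube source):
	    # drop any kubeconfig that does not already point at the expected endpoint
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done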
	I1212 01:29:29.250760  276743 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:29:29.289424  276743 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:29:29.289525  276743 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:29:29.367460  276743 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:29:29.367532  276743 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:29:29.367572  276743 kubeadm.go:319] OS: Linux
	I1212 01:29:29.367620  276743 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:29:29.367668  276743 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:29:29.367716  276743 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:29:29.367765  276743 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:29:29.367814  276743 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:29:29.367862  276743 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:29:29.367907  276743 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:29:29.367956  276743 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:29:29.368003  276743 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:29:29.435977  276743 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:29:29.436136  276743 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:29:29.436234  276743 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:29:29.447414  276743 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:29:29.452711  276743 out.go:252]   - Generating certificates and keys ...
	I1212 01:29:29.452896  276743 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:29:29.452999  276743 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:29:29.453121  276743 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:29:29.453232  276743 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:29:29.453362  276743 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:29:29.453468  276743 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 01:29:29.453582  276743 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:29:29.453693  276743 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:29:29.453811  276743 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:29:29.453920  276743 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:29:29.453981  276743 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 01:29:29.454074  276743 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:29:29.661293  276743 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:29:29.926167  276743 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:29:30.228322  276743 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:29:30.325953  276743 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:29:30.468055  276743 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:29:30.469327  276743 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:29:30.473394  276743 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:29:30.478856  276743 out.go:252]   - Booting up control plane ...
	I1212 01:29:30.478958  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:29:30.479046  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:29:30.479115  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:29:30.498715  276743 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:29:30.498819  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:29:30.506278  276743 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:29:30.506595  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:29:30.506638  276743 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:29:30.667439  276743 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:29:30.667560  276743 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:31:22.255083  268396 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 01:31:22.255118  268396 kubeadm.go:319] 
	I1212 01:31:22.255185  268396 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:31:22.259224  268396 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:31:22.259291  268396 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:31:22.259384  268396 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:31:22.259445  268396 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:31:22.259485  268396 kubeadm.go:319] OS: Linux
	I1212 01:31:22.259534  268396 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:31:22.259586  268396 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:31:22.259638  268396 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:31:22.259689  268396 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:31:22.259742  268396 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:31:22.259793  268396 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:31:22.259842  268396 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:31:22.259894  268396 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:31:22.259943  268396 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:31:22.260016  268396 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:31:22.260113  268396 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:31:22.260208  268396 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:31:22.260274  268396 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:31:22.264965  268396 out.go:252]   - Generating certificates and keys ...
	I1212 01:31:22.265061  268396 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:31:22.265129  268396 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:31:22.265205  268396 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:31:22.265267  268396 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:31:22.265335  268396 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:31:22.265389  268396 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 01:31:22.265452  268396 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:31:22.265511  268396 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:31:22.265581  268396 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:31:22.265657  268396 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:31:22.265698  268396 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 01:31:22.265754  268396 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:31:22.265805  268396 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:31:22.265863  268396 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:31:22.265922  268396 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:31:22.265985  268396 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:31:22.266040  268396 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:31:22.266122  268396 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:31:22.266188  268396 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:31:22.269011  268396 out.go:252]   - Booting up control plane ...
	I1212 01:31:22.269113  268396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:31:22.269196  268396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:31:22.269313  268396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:31:22.269458  268396 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:31:22.269587  268396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:31:22.269697  268396 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:31:22.269820  268396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:31:22.269866  268396 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:31:22.270050  268396 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:31:22.270170  268396 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:31:22.270256  268396 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001218388s
	I1212 01:31:22.270267  268396 kubeadm.go:319] 
	I1212 01:31:22.270326  268396 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:31:22.270369  268396 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:31:22.270483  268396 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:31:22.270503  268396 kubeadm.go:319] 
	I1212 01:31:22.270616  268396 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:31:22.270657  268396 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:31:22.270717  268396 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:31:22.270757  268396 kubeadm.go:319] 
	I1212 01:31:22.270858  268396 kubeadm.go:403] duration metric: took 8m7.867624823s to StartCluster
	I1212 01:31:22.270898  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:31:22.270968  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:31:22.306968  268396 cri.go:89] found id: ""
	I1212 01:31:22.307036  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.307047  268396 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:31:22.307054  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:31:22.307137  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:31:22.339653  268396 cri.go:89] found id: ""
	I1212 01:31:22.339689  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.339700  268396 logs.go:284] No container was found matching "etcd"
	I1212 01:31:22.339706  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:31:22.339765  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:31:22.368586  268396 cri.go:89] found id: ""
	I1212 01:31:22.368607  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.368615  268396 logs.go:284] No container was found matching "coredns"
	I1212 01:31:22.368621  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:31:22.368680  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:31:22.393839  268396 cri.go:89] found id: ""
	I1212 01:31:22.393912  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.393934  268396 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:31:22.393960  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:31:22.394035  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:31:22.419583  268396 cri.go:89] found id: ""
	I1212 01:31:22.419608  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.419616  268396 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:31:22.419622  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:31:22.419680  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:31:22.448415  268396 cri.go:89] found id: ""
	I1212 01:31:22.448443  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.448451  268396 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:31:22.448459  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:31:22.448517  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:31:22.476913  268396 cri.go:89] found id: ""
	I1212 01:31:22.476939  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.476947  268396 logs.go:284] No container was found matching "kindnet"
	I1212 01:31:22.476956  268396 logs.go:123] Gathering logs for kubelet ...
	I1212 01:31:22.476983  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:31:22.533409  268396 logs.go:123] Gathering logs for dmesg ...
	I1212 01:31:22.533444  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:31:22.548368  268396 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:31:22.548401  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:31:22.614148  268396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:31:22.605942    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.606490    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608232    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608633    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.610124    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:31:22.605942    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.606490    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608232    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608633    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.610124    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:31:22.614173  268396 logs.go:123] Gathering logs for containerd ...
	I1212 01:31:22.614185  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:31:22.656511  268396 logs.go:123] Gathering logs for container status ...
	I1212 01:31:22.656543  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:31:22.687238  268396 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001218388s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:31:22.687348  268396 out.go:285] * 
	W1212 01:31:22.687426  268396 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001218388s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:31:22.687441  268396 out.go:285] * 
	W1212 01:31:22.689841  268396 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:31:22.695133  268396 out.go:203] 
	W1212 01:31:22.698069  268396 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001218388s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:31:22.698114  268396 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 01:31:22.698136  268396 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 01:31:22.701165  268396 out.go:203] 
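	Assembling the printed suggestion with the flags logged earlier in this run gives approximately the following retry; this is a sketch pieced together from the log lines above, not a command this run executed:

	    # sketch: suggested retry per the K8S_KUBELET_NOT_RUNNING hint above;
	    # profile, binary, driver, and runtime are taken from this test's logs
	    out/minikube-linux-arm64 start -p no-preload-361053 \
	      --driver=docker --container-runtime=containerd \
	      --extra-config=kubelet.cgroup-driver=systemd

	Note that the kubelet journal further down points at cgroup v1 validation rather than the cgroup driver, so the suggested flag may not address this particular failure.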
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 01:23:04 no-preload-361053 containerd[760]: time="2025-12-12T01:23:04.384035480Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:05 no-preload-361053 containerd[760]: time="2025-12-12T01:23:05.373522512Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 12 01:23:05 no-preload-361053 containerd[760]: time="2025-12-12T01:23:05.375857443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 12 01:23:05 no-preload-361053 containerd[760]: time="2025-12-12T01:23:05.392808161Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:05 no-preload-361053 containerd[760]: time="2025-12-12T01:23:05.393494461Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:06 no-preload-361053 containerd[760]: time="2025-12-12T01:23:06.469307146Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 12 01:23:06 no-preload-361053 containerd[760]: time="2025-12-12T01:23:06.471770579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 12 01:23:06 no-preload-361053 containerd[760]: time="2025-12-12T01:23:06.487399898Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:06 no-preload-361053 containerd[760]: time="2025-12-12T01:23:06.488294224Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:07 no-preload-361053 containerd[760]: time="2025-12-12T01:23:07.584315785Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 12 01:23:07 no-preload-361053 containerd[760]: time="2025-12-12T01:23:07.586361959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 12 01:23:07 no-preload-361053 containerd[760]: time="2025-12-12T01:23:07.593980543Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:07 no-preload-361053 containerd[760]: time="2025-12-12T01:23:07.594678428Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:09 no-preload-361053 containerd[760]: time="2025-12-12T01:23:09.125818664Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 12 01:23:09 no-preload-361053 containerd[760]: time="2025-12-12T01:23:09.128180286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 12 01:23:09 no-preload-361053 containerd[760]: time="2025-12-12T01:23:09.138535463Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:09 no-preload-361053 containerd[760]: time="2025-12-12T01:23:09.139822900Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.221236720Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.223326176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.237305395Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.238695242Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.594370471Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.596617443Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.603750918Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.604059262Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:31:25.373878    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:25.374535    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:25.376159    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:25.376603    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:25.378222    5700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	[Dec12 00:40] hrtimer: interrupt took 11339963 ns
	
	
	==> kernel <==
	 01:31:25 up  2:13,  0 user,  load average: 0.37, 1.22, 1.92
	Linux no-preload-361053 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 01:31:22 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:31:23 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 12 01:31:23 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:23 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:23 no-preload-361053 kubelet[5469]: E1212 01:31:23.081525    5469 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:31:23 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:31:23 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:31:23 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 12 01:31:23 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:23 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:23 no-preload-361053 kubelet[5567]: E1212 01:31:23.853022    5567 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:31:23 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:31:23 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:31:24 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 12 01:31:24 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:24 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:24 no-preload-361053 kubelet[5599]: E1212 01:31:24.586362    5599 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:31:24 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:31:24 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:31:25 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 12 01:31:25 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:25 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:25 no-preload-361053 kubelet[5691]: E1212 01:31:25.341040    5691 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:31:25 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:31:25 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
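	The restart loop above fails kubelet's own validation ("configured to not run on a host using cgroup v1"), consistent with the SystemVerification warning earlier in the log. A standard one-line check of which cgroup hierarchy the host actually mounts (plain util-linux stat, not part of this report's tooling):

	    # cgroup2fs => unified cgroup v2; tmpfs => legacy cgroup v1
	    stat -fc %T /sys/fs/cgroup/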
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-361053 -n no-preload-361053
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-361053 -n no-preload-361053: exit status 6 (346.712653ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 01:31:25.838144  284667 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-361053" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-361053" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-361053
helpers_test.go:244: (dbg) docker inspect no-preload-361053:

-- stdout --
	[
	    {
	        "Id": "68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd",
	        "Created": "2025-12-12T01:22:53.604240637Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 268910,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T01:22:53.788312247Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/hostname",
	        "HostsPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/hosts",
	        "LogPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd-json.log",
	        "Name": "/no-preload-361053",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-361053:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-361053",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd",
	                "LowerDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-361053",
	                "Source": "/var/lib/docker/volumes/no-preload-361053/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-361053",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-361053",
	                "name.minikube.sigs.k8s.io": "no-preload-361053",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6d73be6e1f66a3f7c6d96dca30aa8c1389affdac21224c7034e0e227db3e8397",
	            "SandboxKey": "/var/run/docker/netns/6d73be6e1f66",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-361053": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:21:58:59:ae:af",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee086efedb5c3900c251cd31f9316499408470e70a7d486e64d8b91c6bf60cd7",
	                    "EndpointID": "ae778ff101bac87a43f1ea9fade85a6810900e2d9b74a07254c68fbc89db3f07",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-361053",
	                        "68256fe8de3b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
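The inspect dump above also explains the empty "HostPort" entries in HostConfig.PortBindings: the node container is published with `--publish=127.0.0.1::<port>` (empty host part, visible in the equivalent `docker run` later in this log), so Docker assigns ephemeral host ports and only NetworkSettings.Ports carries the live values (33083-33087). A minimal way to read one back from the shell, using the same Go template the harness itself runs (container name assumed to still exist):

	docker container inspect no-preload-361053 \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# prints 33083, per the NetworkSettings.Ports block above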
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-361053 -n no-preload-361053
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-361053 -n no-preload-361053: exit status 6 (314.323757ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 01:31:26.180833  284749 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-361053" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
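The exit 6 above is kubeconfig drift rather than a dead cluster: the host reports Running, but the "no-preload-361053" context is absent from the kubeconfig, so the endpoint lookup fails. One plausible repair sequence, following the warning's own suggestion (profile assumed intact):

	out/minikube-linux-arm64 -p no-preload-361053 update-context   # rewrite the kubeconfig entry
	kubectl config get-contexts                                    # confirm no-preload-361053 is listed
	out/minikube-linux-arm64 -p no-preload-361053 status           # should exit 0 once the endpoint resolves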
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-361053 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-971096 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:21 UTC │ 12 Dec 25 01:21 UTC │
	│ start   │ -p default-k8s-diff-port-971096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:21 UTC │ 12 Dec 25 01:22 UTC │
	│ image   │ old-k8s-version-147581 image list --format=json                                                                                                                                                                                                            │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ pause   │ -p old-k8s-version-147581 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ unpause │ -p old-k8s-version-147581 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p old-k8s-version-147581                                                                                                                                                                                                                                  │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p old-k8s-version-147581                                                                                                                                                                                                                                  │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:23 UTC │
	│ image   │ default-k8s-diff-port-971096 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ pause   │ -p default-k8s-diff-port-971096 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ unpause │ -p default-k8s-diff-port-971096 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p disable-driver-mounts-539158                                                                                                                                                                                                                            │ disable-driver-mounts-539158 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-648696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ stop    │ -p embed-certs-648696 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ addons  │ enable dashboard -p embed-certs-648696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:24 UTC │
	│ image   │ embed-certs-648696 image list --format=json                                                                                                                                                                                                                │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ pause   │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ unpause │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ start   │ -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 01:25:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:25:10.610326  276743 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:25:10.611013  276743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:25:10.611028  276743 out.go:374] Setting ErrFile to fd 2...
	I1212 01:25:10.611033  276743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:25:10.611296  276743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:25:10.611727  276743 out.go:368] Setting JSON to false
	I1212 01:25:10.612585  276743 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7657,"bootTime":1765495054,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 01:25:10.612655  276743 start.go:143] virtualization:  
	I1212 01:25:10.616537  276743 out.go:179] * [newest-cni-256959] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:25:10.620721  276743 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:25:10.620800  276743 notify.go:221] Checking for updates...
	I1212 01:25:10.627029  276743 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:25:10.630074  276743 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:25:10.633037  276743 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 01:25:10.635913  276743 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:25:10.638863  276743 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:25:10.642342  276743 config.go:182] Loaded profile config "no-preload-361053": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:25:10.642439  276743 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:25:10.663336  276743 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:25:10.663491  276743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:25:10.731650  276743 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:25:10.720876374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:25:10.731750  276743 docker.go:319] overlay module found
	I1212 01:25:10.734985  276743 out.go:179] * Using the docker driver based on user configuration
	I1212 01:25:10.738034  276743 start.go:309] selected driver: docker
	I1212 01:25:10.738050  276743 start.go:927] validating driver "docker" against <nil>
	I1212 01:25:10.738062  276743 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:25:10.738778  276743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:25:10.802614  276743 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:25:10.791452052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:25:10.802835  276743 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1212 01:25:10.802872  276743 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1212 01:25:10.803130  276743 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 01:25:10.806190  276743 out.go:179] * Using Docker driver with root privileges
	I1212 01:25:10.809707  276743 cni.go:84] Creating CNI manager for ""
	I1212 01:25:10.809779  276743 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:25:10.809793  276743 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 01:25:10.809867  276743 start.go:353] cluster config:
	{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:25:10.813089  276743 out.go:179] * Starting "newest-cni-256959" primary control-plane node in "newest-cni-256959" cluster
	I1212 01:25:10.815885  276743 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 01:25:10.818744  276743 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:25:10.821553  276743 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:25:10.821612  276743 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 01:25:10.821626  276743 cache.go:65] Caching tarball of preloaded images
	I1212 01:25:10.821716  276743 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 01:25:10.821731  276743 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 01:25:10.821841  276743 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:25:10.821864  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json: {Name:mk4998d8ef384508a1b134495f81d7fc826b1990 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:10.822019  276743 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:25:10.842165  276743 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:25:10.842191  276743 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:25:10.842204  276743 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:25:10.842235  276743 start.go:360] acquireMachinesLock for newest-cni-256959: {Name:mke4c35c218ad59b1da2c46074b57e71134fc7be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:25:10.842335  276743 start.go:364] duration metric: took 80.822µs to acquireMachinesLock for "newest-cni-256959"
	I1212 01:25:10.842366  276743 start.go:93] Provisioning new machine with config: &{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 01:25:10.842439  276743 start.go:125] createHost starting for "" (driver="docker")
	I1212 01:25:10.846610  276743 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 01:25:10.846848  276743 start.go:159] libmachine.API.Create for "newest-cni-256959" (driver="docker")
	I1212 01:25:10.846884  276743 client.go:173] LocalClient.Create starting
	I1212 01:25:10.846956  276743 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem
	I1212 01:25:10.847041  276743 main.go:143] libmachine: Decoding PEM data...
	I1212 01:25:10.847062  276743 main.go:143] libmachine: Parsing certificate...
	I1212 01:25:10.847105  276743 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem
	I1212 01:25:10.847126  276743 main.go:143] libmachine: Decoding PEM data...
	I1212 01:25:10.847142  276743 main.go:143] libmachine: Parsing certificate...
	I1212 01:25:10.847512  276743 cli_runner.go:164] Run: docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 01:25:10.866623  276743 cli_runner.go:211] docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 01:25:10.866711  276743 network_create.go:284] running [docker network inspect newest-cni-256959] to gather additional debugging logs...
	I1212 01:25:10.866732  276743 cli_runner.go:164] Run: docker network inspect newest-cni-256959
	W1212 01:25:10.882830  276743 cli_runner.go:211] docker network inspect newest-cni-256959 returned with exit code 1
	I1212 01:25:10.882862  276743 network_create.go:287] error running [docker network inspect newest-cni-256959]: docker network inspect newest-cni-256959: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-256959 not found
	I1212 01:25:10.882876  276743 network_create.go:289] output of [docker network inspect newest-cni-256959]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-256959 not found
	
	** /stderr **
	I1212 01:25:10.883058  276743 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:25:10.899622  276743 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4cd687b06342 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a2:e8:c8:87:d3:0a} reservation:<nil>}
	I1212 01:25:10.899939  276743 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c02c16721c9d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:e7:06:63:2c:e9} reservation:<nil>}
	I1212 01:25:10.900288  276743 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-805b07ff58c0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:18:35:7a:03:02} reservation:<nil>}
	I1212 01:25:10.900688  276743 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a309a0}
	I1212 01:25:10.900712  276743 network_create.go:124] attempt to create docker network newest-cni-256959 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1212 01:25:10.900767  276743 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-256959 newest-cni-256959
	I1212 01:25:10.956774  276743 network_create.go:108] docker network newest-cni-256959 192.168.76.0/24 created
	I1212 01:25:10.956809  276743 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-256959" container
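	# A minimal spot-check of the allocation above, assuming the network still exists: minikube takes
	# the first client address (.2) of the subnet it just created, .1 being the gateway.
	docker network inspect newest-cni-256959 --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
	# expected: 192.168.76.0/24 via 192.168.76.1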
	I1212 01:25:10.956884  276743 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 01:25:10.973399  276743 cli_runner.go:164] Run: docker volume create newest-cni-256959 --label name.minikube.sigs.k8s.io=newest-cni-256959 --label created_by.minikube.sigs.k8s.io=true
	I1212 01:25:10.995879  276743 oci.go:103] Successfully created a docker volume newest-cni-256959
	I1212 01:25:10.995970  276743 cli_runner.go:164] Run: docker run --rm --name newest-cni-256959-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-256959 --entrypoint /usr/bin/test -v newest-cni-256959:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 01:25:11.527236  276743 oci.go:107] Successfully prepared a docker volume newest-cni-256959
	I1212 01:25:11.527311  276743 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:25:11.527325  276743 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 01:25:11.527417  276743 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-256959:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 01:25:15.366008  276743 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-256959:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.838540611s)
	I1212 01:25:15.366040  276743 kic.go:203] duration metric: took 3.838711624s to extract preloaded images to volume ...
	W1212 01:25:15.366202  276743 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 01:25:15.366316  276743 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 01:25:15.418308  276743 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-256959 --name newest-cni-256959 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-256959 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-256959 --network newest-cni-256959 --ip 192.168.76.2 --volume newest-cni-256959:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 01:25:15.701138  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Running}}
	I1212 01:25:15.722831  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:25:15.744776  276743 cli_runner.go:164] Run: docker exec newest-cni-256959 stat /var/lib/dpkg/alternatives/iptables
	I1212 01:25:15.800337  276743 oci.go:144] the created container "newest-cni-256959" has a running status.
	I1212 01:25:15.800363  276743 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa...
	I1212 01:25:16.255229  276743 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 01:25:16.278394  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:25:16.296982  276743 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 01:25:16.297007  276743 kic_runner.go:114] Args: [docker exec --privileged newest-cni-256959 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 01:25:16.336579  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:25:16.356755  276743 machine.go:94] provisionDockerMachine start ...
	I1212 01:25:16.356843  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:16.374168  276743 main.go:143] libmachine: Using SSH client type: native
	I1212 01:25:16.374501  276743 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 01:25:16.374511  276743 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:25:16.375249  276743 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42232->127.0.0.1:33093: read: connection reset by peer
	I1212 01:25:19.530494  276743 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:25:19.530520  276743 ubuntu.go:182] provisioning hostname "newest-cni-256959"
	I1212 01:25:19.530584  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:19.548139  276743 main.go:143] libmachine: Using SSH client type: native
	I1212 01:25:19.548459  276743 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 01:25:19.548475  276743 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-256959 && echo "newest-cni-256959" | sudo tee /etc/hostname
	I1212 01:25:19.704022  276743 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:25:19.704112  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:19.721641  276743 main.go:143] libmachine: Using SSH client type: native
	I1212 01:25:19.721955  276743 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 01:25:19.721980  276743 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-256959' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-256959/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-256959' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:25:19.879218  276743 main.go:143] libmachine: SSH cmd err, output: <nil>: 
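	# Rough hand-run equivalent of the hosts fix-up above (a sketch; the real script also rewrites an
	# existing 127.0.1.1 line instead of appending a duplicate):
	grep -q 'newest-cni-256959' /etc/hosts || echo '127.0.1.1 newest-cni-256959' | sudo tee -a /etc/hosts
	# ensures sudo and kubeadm can resolve the node's own hostname without DNS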
	I1212 01:25:19.879312  276743 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 01:25:19.879367  276743 ubuntu.go:190] setting up certificates
	I1212 01:25:19.879397  276743 provision.go:84] configureAuth start
	I1212 01:25:19.879518  276743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:25:19.896152  276743 provision.go:143] copyHostCerts
	I1212 01:25:19.896221  276743 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 01:25:19.896234  276743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 01:25:19.896315  276743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 01:25:19.896434  276743 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 01:25:19.896445  276743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 01:25:19.896476  276743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 01:25:19.896542  276743 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 01:25:19.896551  276743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 01:25:19.896577  276743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 01:25:19.896641  276743 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.newest-cni-256959 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-256959]
	I1212 01:25:20.204760  276743 provision.go:177] copyRemoteCerts
	I1212 01:25:20.204827  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:25:20.204875  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.224116  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.330622  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:25:20.348480  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:25:20.366287  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:25:20.383425  276743 provision.go:87] duration metric: took 503.997002ms to configureAuth
	I1212 01:25:20.383450  276743 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:25:20.383651  276743 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:25:20.383658  276743 machine.go:97] duration metric: took 4.026884265s to provisionDockerMachine
	I1212 01:25:20.383665  276743 client.go:176] duration metric: took 9.536770098s to LocalClient.Create
	I1212 01:25:20.383678  276743 start.go:167] duration metric: took 9.536832859s to libmachine.API.Create "newest-cni-256959"
	I1212 01:25:20.383685  276743 start.go:293] postStartSetup for "newest-cni-256959" (driver="docker")
	I1212 01:25:20.383694  276743 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:25:20.383742  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:25:20.383784  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.400325  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.507208  276743 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:25:20.510550  276743 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:25:20.510580  276743 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:25:20.510595  276743 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 01:25:20.510649  276743 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 01:25:20.510733  276743 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 01:25:20.510839  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:25:20.518172  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:25:20.536052  276743 start.go:296] duration metric: took 152.353471ms for postStartSetup
	I1212 01:25:20.536438  276743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:25:20.555757  276743 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:25:20.556035  276743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:25:20.556076  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.580490  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.688189  276743 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:25:20.693057  276743 start.go:128] duration metric: took 9.850603168s to createHost
	I1212 01:25:20.693084  276743 start.go:83] releasing machines lock for "newest-cni-256959", held for 9.850734377s
	I1212 01:25:20.693172  276743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:25:20.709859  276743 ssh_runner.go:195] Run: cat /version.json
	I1212 01:25:20.709914  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.710177  276743 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:25:20.710239  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.729457  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.741797  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.834710  276743 ssh_runner.go:195] Run: systemctl --version
	I1212 01:25:20.924313  276743 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:25:20.928847  276743 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:25:20.928951  276743 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:25:20.954175  276743 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1212 01:25:20.954200  276743 start.go:496] detecting cgroup driver to use...
	I1212 01:25:20.954231  276743 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:25:20.954281  276743 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 01:25:20.969656  276743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 01:25:20.982642  276743 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:25:20.982710  276743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:25:20.999922  276743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:25:21.020615  276743 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:25:21.141838  276743 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:25:21.263080  276743 docker.go:234] disabling docker service ...
	I1212 01:25:21.263148  276743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:25:21.287246  276743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:25:21.310750  276743 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:25:21.444187  276743 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:25:21.567374  276743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:25:21.580397  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:25:21.594203  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 01:25:21.603451  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 01:25:21.612481  276743 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 01:25:21.612614  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 01:25:21.621568  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:25:21.630132  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 01:25:21.639772  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:25:21.648550  276743 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:25:21.656926  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 01:25:21.666027  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 01:25:21.675421  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 01:25:21.684275  276743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:25:21.692101  276743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:25:21.699082  276743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:25:21.804978  276743 ssh_runner.go:195] Run: sudo systemctl restart containerd
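
The sed edits above force SystemdCgroup = false so containerd matches the cgroupfs driver detected on the host; after the restart this can be double-checked from inside the node (a sketch, assuming the usual containerd/crictl binaries are on the PATH):

  sudo containerd config dump | grep -i SystemdCgroup
  sudo crictl info | grep -i cgroup
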
	I1212 01:25:21.939895  276743 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 01:25:21.939976  276743 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 01:25:21.944016  276743 start.go:564] Will wait 60s for crictl version
	I1212 01:25:21.944158  276743 ssh_runner.go:195] Run: which crictl
	I1212 01:25:21.947592  276743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:25:21.970388  276743 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 01:25:21.970506  276743 ssh_runner.go:195] Run: containerd --version
	I1212 01:25:21.989928  276743 ssh_runner.go:195] Run: containerd --version
	I1212 01:25:22.016197  276743 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 01:25:22.019317  276743 cli_runner.go:164] Run: docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:25:22.042635  276743 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 01:25:22.047510  276743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:25:22.062564  276743 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 01:25:22.065397  276743 kubeadm.go:884] updating cluster {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:25:22.065551  276743 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:25:22.065640  276743 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:25:22.102156  276743 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:25:22.102183  276743 containerd.go:534] Images already preloaded, skipping extraction
	I1212 01:25:22.102250  276743 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:25:22.129883  276743 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:25:22.129908  276743 cache_images.go:86] Images are preloaded, skipping loading
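
The preload check is just crictl images in JSON form. To list what the runtime actually holds, a hedged one-liner (assumes jq is installed, which the node image may not guarantee):

  sudo crictl images --output json | jq -r '.images[].repoTags[]'
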
	I1212 01:25:22.129916  276743 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1212 01:25:22.130003  276743 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-256959 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:25:22.130072  276743 ssh_runner.go:195] Run: sudo crictl info
	I1212 01:25:22.157382  276743 cni.go:84] Creating CNI manager for ""
	I1212 01:25:22.157407  276743 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:25:22.157422  276743 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 01:25:22.157449  276743 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-256959 NodeName:newest-cni-256959 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:25:22.157566  276743 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-256959"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:25:22.157640  276743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 01:25:22.165592  276743 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:25:22.165664  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:25:22.173544  276743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:25:22.186913  276743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 01:25:22.199980  276743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
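
The generated kubeadm config is staged at /var/tmp/minikube/kubeadm.yaml.new before use. Recent kubeadm releases can sanity-check such a file offline; minikube does not do so here, so the following is only a sketch:

  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
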
	I1212 01:25:22.212497  276743 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:25:22.216212  276743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:25:22.226129  276743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:25:22.341565  276743 ssh_runner.go:195] Run: sudo systemctl start kubelet
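
At this point the kubelet.service unit and its 10-kubeadm.conf drop-in are installed and the service has been started; standard systemd commands show the effective unit and recent output:

  systemctl cat kubelet
  journalctl -u kubelet --no-pager -n 20
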
	I1212 01:25:22.362735  276743 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959 for IP: 192.168.76.2
	I1212 01:25:22.362758  276743 certs.go:195] generating shared ca certs ...
	I1212 01:25:22.362774  276743 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:22.362922  276743 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 01:25:22.362982  276743 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 01:25:22.363063  276743 certs.go:257] generating profile certs ...
	I1212 01:25:22.363128  276743 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key
	I1212 01:25:22.363145  276743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.crt with IP's: []
	I1212 01:25:23.043220  276743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.crt ...
	I1212 01:25:23.043305  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.crt: {Name:mke800b4895a7f26c3f61118ac2a9636e3a9248a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.043557  276743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key ...
	I1212 01:25:23.043596  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key: {Name:mkb2206776a08341de5b9d37086d859f3539aa54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.043743  276743 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93
	I1212 01:25:23.043783  276743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1212 01:25:23.163980  276743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93 ...
	I1212 01:25:23.164017  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93: {Name:mk05b9dd6b8930af6580fe78d40e6026f3e8847a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.164237  276743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93 ...
	I1212 01:25:23.164254  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93: {Name:mk5e2ac6bbc37c39d5b319f8600a5d25e63c4a12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.164355  276743 certs.go:382] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93 -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt
	I1212 01:25:23.164449  276743 certs.go:386] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93 -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key
	I1212 01:25:23.164518  276743 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key
	I1212 01:25:23.164541  276743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt with IP's: []
	I1212 01:25:23.503416  276743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt ...
	I1212 01:25:23.503453  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt: {Name:mka5a6a7cee07eb7c969d496d8aa380d667ba867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.503635  276743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key ...
	I1212 01:25:23.503652  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key: {Name:mkd0b1a9e86a7f90668157e83a73d06f56064ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.503848  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 01:25:23.503900  276743 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 01:25:23.503913  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:25:23.503965  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:25:23.503999  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:25:23.504032  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 01:25:23.504080  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:25:23.504696  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:25:23.524669  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:25:23.551586  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:25:23.572683  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:25:23.594234  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:25:23.612544  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:25:23.629869  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:25:23.647023  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:25:23.664698  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 01:25:23.682123  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:25:23.699502  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 01:25:23.716689  276743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
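
With the certificates and kubeconfig staged under /var/lib/minikube, subject, expiry, and SANs can be confirmed with openssl (OpenSSL 1.1.1+ for the -ext flag; a sketch against the paths shown above):

  sudo openssl x509 -noout -subject -enddate -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt
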
	I1212 01:25:23.729602  276743 ssh_runner.go:195] Run: openssl version
	I1212 01:25:23.735703  276743 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.742851  276743 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 01:25:23.750077  276743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.753690  276743 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.753758  276743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.795206  276743 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:25:23.802639  276743 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4290.pem /etc/ssl/certs/51391683.0
	I1212 01:25:23.809951  276743 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.817139  276743 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 01:25:23.830841  276743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.834926  276743 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.835070  276743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.876422  276743 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 01:25:23.885502  276743 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42902.pem /etc/ssl/certs/3ec20f2e.0
	I1212 01:25:23.893037  276743 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.900652  276743 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:25:23.908635  276743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.912614  276743 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.912690  276743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.956401  276743 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:25:23.964299  276743 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
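
The test/openssl/ln sequence above implements the standard OpenSSL hashed-directory layout: each CA is symlinked as <subject-hash>.0 under /etc/ssl/certs so verifiers can find it. The same thing in two lines of shell:

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
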
	I1212 01:25:23.971681  276743 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:25:23.975558  276743 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
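
The failed stat is deliberate: a missing apiserver-kubelet-client.crt is minikube's signal that this is a first start. The same probe in plain shell:

  stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1 || echo "likely first start"
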
	I1212 01:25:23.975609  276743 kubeadm.go:401] StartCluster: {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:25:23.975697  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 01:25:23.975759  276743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:25:24.003976  276743 cri.go:89] found id: ""
	I1212 01:25:24.004073  276743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:25:24.014227  276743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:25:24.022866  276743 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:25:24.022958  276743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:25:24.031328  276743 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:25:24.031359  276743 kubeadm.go:158] found existing configuration files:
	
	I1212 01:25:24.031425  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:25:24.039632  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:25:24.039710  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:25:24.047426  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:25:24.055269  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:25:24.055386  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:25:24.062906  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:25:24.070757  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:25:24.070846  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:25:24.078322  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:25:24.086235  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:25:24.086340  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
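
The four grep/rm pairs above reduce to one loop: drop any kubeconfig that does not point at the expected control-plane endpoint (a compact sketch of the same logic):

  for f in admin kubelet controller-manager scheduler; do
    sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" 2>/dev/null \
      || sudo rm -f "/etc/kubernetes/${f}.conf"
  done
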
	I1212 01:25:24.093978  276743 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:25:24.130495  276743 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:25:24.130556  276743 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:25:24.204494  276743 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:25:24.204576  276743 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:25:24.204617  276743 kubeadm.go:319] OS: Linux
	I1212 01:25:24.204667  276743 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:25:24.204719  276743 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:25:24.204770  276743 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:25:24.204821  276743 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:25:24.204871  276743 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:25:24.204928  276743 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:25:24.204978  276743 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:25:24.205039  276743 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:25:24.205089  276743 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:25:24.274059  276743 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:25:24.274248  276743 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:25:24.274393  276743 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:25:24.281432  276743 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:25:24.288265  276743 out.go:252]   - Generating certificates and keys ...
	I1212 01:25:24.288438  276743 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:25:24.288544  276743 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:25:24.872395  276743 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 01:25:24.948048  276743 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 01:25:25.302518  276743 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 01:25:25.648856  276743 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 01:25:25.789938  276743 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 01:25:25.790397  276743 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 01:25:26.099340  276743 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 01:25:26.099559  276743 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 01:25:26.538607  276743 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 01:25:27.389042  276743 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 01:25:27.842473  276743 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 01:25:27.842877  276743 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:25:27.936371  276743 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:25:28.210661  276743 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:25:28.314836  276743 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:25:28.428208  276743 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:25:28.580595  276743 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:25:28.581418  276743 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:25:28.584199  276743 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:25:28.587820  276743 out.go:252]   - Booting up control plane ...
	I1212 01:25:28.587929  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:25:28.588012  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:25:28.589356  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:25:28.605527  276743 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:25:28.605678  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:25:28.613455  276743 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:25:28.614074  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:25:28.614240  276743 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:25:28.755452  276743 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:25:28.755580  276743 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:27:19.226422  268396 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000962088s
	I1212 01:27:19.226635  268396 kubeadm.go:319] 
	I1212 01:27:19.226702  268396 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:27:19.226735  268396 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:27:19.226840  268396 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:27:19.226847  268396 kubeadm.go:319] 
	I1212 01:27:19.227012  268396 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:27:19.227062  268396 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:27:19.227095  268396 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:27:19.227099  268396 kubeadm.go:319] 
	I1212 01:27:19.231490  268396 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:27:19.231948  268396 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:27:19.232070  268396 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:27:19.232304  268396 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 01:27:19.232318  268396 kubeadm.go:319] 
	W1212 01:27:19.232506  268396 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-361053] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-361053] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000962088s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
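
The wait-control-plane failure means the kubelet never answered its local health endpoint within 4m0s. The checks kubeadm suggests, plus the probe it performs, in shell form:

  curl -sSf http://127.0.0.1:10248/healthz; echo
  systemctl status kubelet --no-pager
  journalctl -xeu kubelet | tail -n 50
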
	
	I1212 01:27:19.232600  268396 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 01:27:19.232891  268396 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:27:19.641819  268396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:27:19.655717  268396 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:27:19.655786  268396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:27:19.664059  268396 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:27:19.664079  268396 kubeadm.go:158] found existing configuration files:
	
	I1212 01:27:19.664128  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:27:19.672510  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:27:19.672575  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:27:19.680342  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:27:19.688315  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:27:19.688383  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:27:19.696209  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:27:19.704155  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:27:19.704219  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:27:19.711899  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:27:19.719844  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:27:19.719910  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:27:19.727687  268396 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:27:19.860959  268396 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:27:19.861382  268396 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:27:19.927748  268396 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:29:28.751512  276743 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000466498s
	I1212 01:29:28.751546  276743 kubeadm.go:319] 
	I1212 01:29:28.751605  276743 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:29:28.751644  276743 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:29:28.751765  276743 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:29:28.751774  276743 kubeadm.go:319] 
	I1212 01:29:28.751883  276743 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:29:28.751919  276743 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:29:28.751963  276743 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:29:28.751972  276743 kubeadm.go:319] 
	I1212 01:29:28.757988  276743 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:29:28.758457  276743 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:29:28.758593  276743 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:29:28.759136  276743 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 01:29:28.759148  276743 kubeadm.go:319] 
	I1212 01:29:28.759296  276743 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 01:29:28.759448  276743 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000466498s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
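
The repeated cgroups v1 warning is a plausible root cause on this 5.15 AWS kernel: per the warning text, kubelet v1.35 or newer refuses cgroup v1 unless support is explicitly re-enabled. A hedged sketch of the opt-out it describes (the field name failCgroupV1 is inferred from the warning; verify against the v1.35 KubeletConfiguration docs):

  printf 'failCgroupV1: false\n' | sudo tee -a /var/lib/kubelet/config.yaml
  sudo systemctl restart kubelet
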
	
	I1212 01:29:28.759537  276743 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 01:29:29.171145  276743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:29:29.184061  276743 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:29:29.184150  276743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:29:29.191792  276743 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:29:29.191813  276743 kubeadm.go:158] found existing configuration files:
	
	I1212 01:29:29.191872  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:29:29.199430  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:29:29.199502  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:29:29.206493  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:29:29.213869  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:29:29.213974  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:29:29.221146  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:29:29.228771  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:29:29.228848  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:29:29.236019  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:29:29.243394  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:29:29.243513  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
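The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and since every file is absent after the reset, all four removals are no-ops. The sequence compresses to roughly this sketch (endpoint and paths copied from the log):

	# Sketch of the cleanup performed above; grep exits non-zero for a missing
	# file, so absent configs fall through to the rm as a harmless no-op.
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done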
	I1212 01:29:29.250760  276743 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:29:29.289424  276743 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:29:29.289525  276743 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:29:29.367460  276743 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:29:29.367532  276743 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:29:29.367572  276743 kubeadm.go:319] OS: Linux
	I1212 01:29:29.367620  276743 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:29:29.367668  276743 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:29:29.367716  276743 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:29:29.367765  276743 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:29:29.367814  276743 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:29:29.367862  276743 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:29:29.367907  276743 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:29:29.367956  276743 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:29:29.368003  276743 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:29:29.435977  276743 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:29:29.436136  276743 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:29:29.436234  276743 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:29:29.447414  276743 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:29:29.452711  276743 out.go:252]   - Generating certificates and keys ...
	I1212 01:29:29.452896  276743 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:29:29.452999  276743 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:29:29.453121  276743 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:29:29.453232  276743 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:29:29.453362  276743 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:29:29.453468  276743 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 01:29:29.453582  276743 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:29:29.453693  276743 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:29:29.453811  276743 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:29:29.453920  276743 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:29:29.453981  276743 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 01:29:29.454074  276743 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:29:29.661293  276743 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:29:29.926167  276743 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:29:30.228322  276743 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:29:30.325953  276743 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:29:30.468055  276743 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:29:30.469327  276743 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:29:30.473394  276743 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:29:30.478856  276743 out.go:252]   - Booting up control plane ...
	I1212 01:29:30.478958  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:29:30.479046  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:29:30.479115  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:29:30.498715  276743 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:29:30.498819  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:29:30.506278  276743 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:29:30.506595  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:29:30.506638  276743 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:29:30.667439  276743 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:29:30.667560  276743 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:31:22.255083  268396 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 01:31:22.255118  268396 kubeadm.go:319] 
	I1212 01:31:22.255185  268396 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:31:22.259224  268396 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:31:22.259291  268396 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:31:22.259384  268396 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:31:22.259445  268396 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:31:22.259485  268396 kubeadm.go:319] OS: Linux
	I1212 01:31:22.259534  268396 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:31:22.259586  268396 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:31:22.259638  268396 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:31:22.259689  268396 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:31:22.259742  268396 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:31:22.259793  268396 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:31:22.259842  268396 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:31:22.259894  268396 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:31:22.259943  268396 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:31:22.260016  268396 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:31:22.260113  268396 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:31:22.260208  268396 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:31:22.260274  268396 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:31:22.264965  268396 out.go:252]   - Generating certificates and keys ...
	I1212 01:31:22.265061  268396 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:31:22.265129  268396 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:31:22.265205  268396 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:31:22.265267  268396 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:31:22.265335  268396 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:31:22.265389  268396 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 01:31:22.265452  268396 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:31:22.265511  268396 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:31:22.265581  268396 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:31:22.265657  268396 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:31:22.265698  268396 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 01:31:22.265754  268396 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:31:22.265805  268396 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:31:22.265863  268396 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:31:22.265922  268396 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:31:22.265985  268396 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:31:22.266040  268396 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:31:22.266122  268396 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:31:22.266188  268396 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:31:22.269011  268396 out.go:252]   - Booting up control plane ...
	I1212 01:31:22.269113  268396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:31:22.269196  268396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:31:22.269313  268396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:31:22.269458  268396 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:31:22.269587  268396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:31:22.269697  268396 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:31:22.269820  268396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:31:22.269866  268396 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:31:22.270050  268396 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:31:22.270170  268396 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:31:22.270256  268396 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001218388s
	I1212 01:31:22.270267  268396 kubeadm.go:319] 
	I1212 01:31:22.270326  268396 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:31:22.270369  268396 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:31:22.270483  268396 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:31:22.270503  268396 kubeadm.go:319] 
	I1212 01:31:22.270616  268396 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:31:22.270657  268396 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:31:22.270717  268396 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:31:22.270757  268396 kubeadm.go:319] 
	I1212 01:31:22.270858  268396 kubeadm.go:403] duration metric: took 8m7.867624823s to StartCluster
	I1212 01:31:22.270898  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:31:22.270968  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:31:22.306968  268396 cri.go:89] found id: ""
	I1212 01:31:22.307036  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.307047  268396 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:31:22.307054  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:31:22.307137  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:31:22.339653  268396 cri.go:89] found id: ""
	I1212 01:31:22.339689  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.339700  268396 logs.go:284] No container was found matching "etcd"
	I1212 01:31:22.339706  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:31:22.339765  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:31:22.368586  268396 cri.go:89] found id: ""
	I1212 01:31:22.368607  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.368615  268396 logs.go:284] No container was found matching "coredns"
	I1212 01:31:22.368621  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:31:22.368680  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:31:22.393839  268396 cri.go:89] found id: ""
	I1212 01:31:22.393912  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.393934  268396 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:31:22.393960  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:31:22.394035  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:31:22.419583  268396 cri.go:89] found id: ""
	I1212 01:31:22.419608  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.419616  268396 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:31:22.419622  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:31:22.419680  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:31:22.448415  268396 cri.go:89] found id: ""
	I1212 01:31:22.448443  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.448451  268396 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:31:22.448459  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:31:22.448517  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:31:22.476913  268396 cri.go:89] found id: ""
	I1212 01:31:22.476939  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.476947  268396 logs.go:284] No container was found matching "kindnet"
	I1212 01:31:22.476956  268396 logs.go:123] Gathering logs for kubelet ...
	I1212 01:31:22.476983  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:31:22.533409  268396 logs.go:123] Gathering logs for dmesg ...
	I1212 01:31:22.533444  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:31:22.548368  268396 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:31:22.548401  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:31:22.614148  268396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:31:22.605942    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.606490    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608232    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608633    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.610124    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:31:22.605942    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.606490    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608232    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608633    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.610124    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:31:22.614173  268396 logs.go:123] Gathering logs for containerd ...
	I1212 01:31:22.614185  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:31:22.656511  268396 logs.go:123] Gathering logs for container status ...
	I1212 01:31:22.656543  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
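Once the wait times out, minikube sweeps the node for evidence: it asks the CRI for containers of each control-plane component (every query above returned an empty ID list), pulls the kubelet and containerd journals, reads dmesg, and attempts a describe nodes that fails because nothing is listening on 8443. The sweep is roughly equivalent to this sketch (component names copied from the log):

	# Sketch of the post-mortem sweep above; on this run every crictl query
	# came back empty because no control-plane container was ever created.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$c"
	done
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400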
	W1212 01:31:22.687238  268396 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001218388s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:31:22.687348  268396 out.go:285] * 
	W1212 01:31:22.687426  268396 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001218388s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:31:22.687441  268396 out.go:285] * 
	W1212 01:31:22.689841  268396 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:31:22.695133  268396 out.go:203] 
	W1212 01:31:22.698069  268396 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001218388s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:31:22.698114  268396 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 01:31:22.698136  268396 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 01:31:22.701165  268396 out.go:203] 
	
	
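The suggestion printed above (--extra-config=kubelet.cgroup-driver=systemd, issue #4172) targets a cgroup-driver mismatch, which is not the failure mode here: the kubelet exits during configuration validation before any driver is exercised (see the ==> kubelet <== section below). For completeness, the suggested invocation would look roughly like this; the profile name and runtime flags are taken from this report, and the flag is unlikely to help on a cgroup v1 host:

	# Sketch of the workaround minikube suggests above; probably ineffective here,
	# since kubelet v1.35 rejects the cgroup v1 host outright.
	out/minikube-linux-arm64 start -p no-preload-361053 \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd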
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 01:23:04 no-preload-361053 containerd[760]: time="2025-12-12T01:23:04.384035480Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:05 no-preload-361053 containerd[760]: time="2025-12-12T01:23:05.373522512Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 12 01:23:05 no-preload-361053 containerd[760]: time="2025-12-12T01:23:05.375857443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 12 01:23:05 no-preload-361053 containerd[760]: time="2025-12-12T01:23:05.392808161Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:05 no-preload-361053 containerd[760]: time="2025-12-12T01:23:05.393494461Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:06 no-preload-361053 containerd[760]: time="2025-12-12T01:23:06.469307146Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 12 01:23:06 no-preload-361053 containerd[760]: time="2025-12-12T01:23:06.471770579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 12 01:23:06 no-preload-361053 containerd[760]: time="2025-12-12T01:23:06.487399898Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:06 no-preload-361053 containerd[760]: time="2025-12-12T01:23:06.488294224Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:07 no-preload-361053 containerd[760]: time="2025-12-12T01:23:07.584315785Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 12 01:23:07 no-preload-361053 containerd[760]: time="2025-12-12T01:23:07.586361959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 12 01:23:07 no-preload-361053 containerd[760]: time="2025-12-12T01:23:07.593980543Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:07 no-preload-361053 containerd[760]: time="2025-12-12T01:23:07.594678428Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:09 no-preload-361053 containerd[760]: time="2025-12-12T01:23:09.125818664Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 12 01:23:09 no-preload-361053 containerd[760]: time="2025-12-12T01:23:09.128180286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 12 01:23:09 no-preload-361053 containerd[760]: time="2025-12-12T01:23:09.138535463Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:09 no-preload-361053 containerd[760]: time="2025-12-12T01:23:09.139822900Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.221236720Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.223326176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.237305395Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.238695242Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.594370471Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.596617443Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.603750918Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.604059262Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:31:26.845365    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:26.846009    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:26.847749    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:26.848281    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:26.850016    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	[Dec12 00:40] hrtimer: interrupt took 11339963 ns
	
	
	==> kernel <==
	 01:31:26 up  2:13,  0 user,  load average: 0.42, 1.22, 1.91
	Linux no-preload-361053 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 01:31:23 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:31:24 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 12 01:31:24 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:24 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:24 no-preload-361053 kubelet[5599]: E1212 01:31:24.586362    5599 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:31:24 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:31:24 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:31:25 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 12 01:31:25 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:25 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:25 no-preload-361053 kubelet[5691]: E1212 01:31:25.341040    5691 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:31:25 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:31:25 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:31:26 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 12 01:31:26 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:26 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:26 no-preload-361053 kubelet[5731]: E1212 01:31:26.100632    5731 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:31:26 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:31:26 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:31:26 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 326.
	Dec 12 01:31:26 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:26 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:31:26 no-preload-361053 kubelet[5829]: E1212 01:31:26.839459    5829 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:31:26 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:31:26 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
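The ==> kubelet <== section above closes the loop: systemd keeps restarting the unit (counter 323 through 326 in this window alone) and every attempt dies with "kubelet is configured to not run on a host using cgroup v1", which is exactly why the 4m0s health check at 127.0.0.1:10248/healthz never succeeded. A one-line sketch for confirming a host's cgroup version:

	# Prints 'cgroup2fs' on a cgroup v2 host; 'tmpfs' indicates cgroup v1,
	# as on this 5.15.0-1084-aws node.
	stat -fc %T /sys/fs/cgroup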
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-361053 -n no-preload-361053
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-361053 -n no-preload-361053: exit status 6 (370.210039ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 01:31:27.341801  284969 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-361053" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-361053" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (3.04s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (101.55s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-361053 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1212 01:31:46.351770    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-361053 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m39.980095826s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
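Every kubectl apply in the stderr above fails with connection refused on localhost:8443, so the addon failure is a downstream symptom of the apiserver never starting, not a metrics-server problem. A sketch for verifying that from the host, reusing the binary and kubeconfig paths shown in the log (the docker exec wrapper is an assumption about how to reach the node):

	# Sketch: run the in-node kubectl against the same endpoint the addon
	# apply used; on this run it would fail with the same connection refused.
	docker exec no-preload-361053 sudo \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig cluster-info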
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-361053 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-361053 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-361053 describe deploy/metrics-server -n kube-system: exit status 1 (70.078946ms)

** stderr ** 
	error: context "no-preload-361053" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-361053 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
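
The describe call never reaches the deployment: by this point the "no-preload-361053" context is missing from the kubeconfig entirely. A quick way to list the contexts that remain (hypothetical follow-up, not part of the test run):

    kubectl config get-contexts
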
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-361053
helpers_test.go:244: (dbg) docker inspect no-preload-361053:

-- stdout --
	[
	    {
	        "Id": "68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd",
	        "Created": "2025-12-12T01:22:53.604240637Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 268910,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T01:22:53.788312247Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/hostname",
	        "HostsPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/hosts",
	        "LogPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd-json.log",
	        "Name": "/no-preload-361053",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-361053:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-361053",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd",
	                "LowerDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-361053",
	                "Source": "/var/lib/docker/volumes/no-preload-361053/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-361053",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-361053",
	                "name.minikube.sigs.k8s.io": "no-preload-361053",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6d73be6e1f66a3f7c6d96dca30aa8c1389affdac21224c7034e0e227db3e8397",
	            "SandboxKey": "/var/run/docker/netns/6d73be6e1f66",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-361053": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:21:58:59:ae:af",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee086efedb5c3900c251cd31f9316499408470e70a7d486e64d8b91c6bf60cd7",
	                    "EndpointID": "ae778ff101bac87a43f1ea9fade85a6810900e2d9b74a07254c68fbc89db3f07",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-361053",
	                        "68256fe8de3b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
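
The inspect output confirms the container is still Running, with every guest port published on an ephemeral 127.0.0.1 host port (for example 8443/tcp on 33086). A single mapping can be read back with the same Go-template pattern minikube itself uses later in the start log, assuming the container still exists:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-361053
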
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-361053 -n no-preload-361053
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-361053 -n no-preload-361053: exit status 6 (319.94276ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 01:33:07.732564  286686 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-361053" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
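
The status output narrows the failure down: the host is Running, but the kubeconfig carries no endpoint for this profile. Following the hint minikube itself prints above (hypothetical follow-up, not part of the test run):

    minikube -p no-preload-361053 update-context
    minikube -p no-preload-361053 status
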
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-361053 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-971096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:21 UTC │ 12 Dec 25 01:22 UTC │
	│ image   │ old-k8s-version-147581 image list --format=json                                                                                                                                                                                                            │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ pause   │ -p old-k8s-version-147581 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ unpause │ -p old-k8s-version-147581 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p old-k8s-version-147581                                                                                                                                                                                                                                  │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p old-k8s-version-147581                                                                                                                                                                                                                                  │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:23 UTC │
	│ image   │ default-k8s-diff-port-971096 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ pause   │ -p default-k8s-diff-port-971096 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ unpause │ -p default-k8s-diff-port-971096 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p disable-driver-mounts-539158                                                                                                                                                                                                                            │ disable-driver-mounts-539158 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-648696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ stop    │ -p embed-certs-648696 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ addons  │ enable dashboard -p embed-certs-648696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:24 UTC │
	│ image   │ embed-certs-648696 image list --format=json                                                                                                                                                                                                                │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ pause   │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ unpause │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ start   │ -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-361053 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:31 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 01:25:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:25:10.610326  276743 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:25:10.611013  276743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:25:10.611028  276743 out.go:374] Setting ErrFile to fd 2...
	I1212 01:25:10.611033  276743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:25:10.611296  276743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:25:10.611727  276743 out.go:368] Setting JSON to false
	I1212 01:25:10.612585  276743 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7657,"bootTime":1765495054,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 01:25:10.612655  276743 start.go:143] virtualization:  
	I1212 01:25:10.616537  276743 out.go:179] * [newest-cni-256959] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:25:10.620721  276743 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:25:10.620800  276743 notify.go:221] Checking for updates...
	I1212 01:25:10.627029  276743 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:25:10.630074  276743 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:25:10.633037  276743 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 01:25:10.635913  276743 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:25:10.638863  276743 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:25:10.642342  276743 config.go:182] Loaded profile config "no-preload-361053": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:25:10.642439  276743 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:25:10.663336  276743 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:25:10.663491  276743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:25:10.731650  276743 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:25:10.720876374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:25:10.731750  276743 docker.go:319] overlay module found
	I1212 01:25:10.734985  276743 out.go:179] * Using the docker driver based on user configuration
	I1212 01:25:10.738034  276743 start.go:309] selected driver: docker
	I1212 01:25:10.738050  276743 start.go:927] validating driver "docker" against <nil>
	I1212 01:25:10.738062  276743 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:25:10.738778  276743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:25:10.802614  276743 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:25:10.791452052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:25:10.802835  276743 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1212 01:25:10.802872  276743 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1212 01:25:10.803130  276743 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 01:25:10.806190  276743 out.go:179] * Using Docker driver with root privileges
	I1212 01:25:10.809707  276743 cni.go:84] Creating CNI manager for ""
	I1212 01:25:10.809779  276743 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:25:10.809793  276743 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 01:25:10.809867  276743 start.go:353] cluster config:
	{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:25:10.813089  276743 out.go:179] * Starting "newest-cni-256959" primary control-plane node in "newest-cni-256959" cluster
	I1212 01:25:10.815885  276743 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 01:25:10.818744  276743 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:25:10.821553  276743 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:25:10.821612  276743 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 01:25:10.821626  276743 cache.go:65] Caching tarball of preloaded images
	I1212 01:25:10.821716  276743 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 01:25:10.821731  276743 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 01:25:10.821841  276743 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:25:10.821864  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json: {Name:mk4998d8ef384508a1b134495f81d7fc826b1990 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:10.822019  276743 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:25:10.842165  276743 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:25:10.842191  276743 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:25:10.842204  276743 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:25:10.842235  276743 start.go:360] acquireMachinesLock for newest-cni-256959: {Name:mke4c35c218ad59b1da2c46074b57e71134fc7be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:25:10.842335  276743 start.go:364] duration metric: took 80.822µs to acquireMachinesLock for "newest-cni-256959"
	I1212 01:25:10.842366  276743 start.go:93] Provisioning new machine with config: &{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 01:25:10.842439  276743 start.go:125] createHost starting for "" (driver="docker")
	I1212 01:25:10.846610  276743 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 01:25:10.846848  276743 start.go:159] libmachine.API.Create for "newest-cni-256959" (driver="docker")
	I1212 01:25:10.846884  276743 client.go:173] LocalClient.Create starting
	I1212 01:25:10.846956  276743 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem
	I1212 01:25:10.847041  276743 main.go:143] libmachine: Decoding PEM data...
	I1212 01:25:10.847062  276743 main.go:143] libmachine: Parsing certificate...
	I1212 01:25:10.847105  276743 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem
	I1212 01:25:10.847126  276743 main.go:143] libmachine: Decoding PEM data...
	I1212 01:25:10.847142  276743 main.go:143] libmachine: Parsing certificate...
	I1212 01:25:10.847512  276743 cli_runner.go:164] Run: docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 01:25:10.866623  276743 cli_runner.go:211] docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 01:25:10.866711  276743 network_create.go:284] running [docker network inspect newest-cni-256959] to gather additional debugging logs...
	I1212 01:25:10.866732  276743 cli_runner.go:164] Run: docker network inspect newest-cni-256959
	W1212 01:25:10.882830  276743 cli_runner.go:211] docker network inspect newest-cni-256959 returned with exit code 1
	I1212 01:25:10.882862  276743 network_create.go:287] error running [docker network inspect newest-cni-256959]: docker network inspect newest-cni-256959: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-256959 not found
	I1212 01:25:10.882876  276743 network_create.go:289] output of [docker network inspect newest-cni-256959]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-256959 not found
	
	** /stderr **
	I1212 01:25:10.883058  276743 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:25:10.899622  276743 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4cd687b06342 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a2:e8:c8:87:d3:0a} reservation:<nil>}
	I1212 01:25:10.899939  276743 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c02c16721c9d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:e7:06:63:2c:e9} reservation:<nil>}
	I1212 01:25:10.900288  276743 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-805b07ff58c0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:18:35:7a:03:02} reservation:<nil>}
	I1212 01:25:10.900688  276743 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a309a0}
	I1212 01:25:10.900712  276743 network_create.go:124] attempt to create docker network newest-cni-256959 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1212 01:25:10.900767  276743 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-256959 newest-cni-256959
	I1212 01:25:10.956774  276743 network_create.go:108] docker network newest-cni-256959 192.168.76.0/24 created
	I1212 01:25:10.956809  276743 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-256959" container
	I1212 01:25:10.956884  276743 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 01:25:10.973399  276743 cli_runner.go:164] Run: docker volume create newest-cni-256959 --label name.minikube.sigs.k8s.io=newest-cni-256959 --label created_by.minikube.sigs.k8s.io=true
	I1212 01:25:10.995879  276743 oci.go:103] Successfully created a docker volume newest-cni-256959
	I1212 01:25:10.995970  276743 cli_runner.go:164] Run: docker run --rm --name newest-cni-256959-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-256959 --entrypoint /usr/bin/test -v newest-cni-256959:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 01:25:11.527236  276743 oci.go:107] Successfully prepared a docker volume newest-cni-256959
	I1212 01:25:11.527311  276743 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:25:11.527325  276743 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 01:25:11.527417  276743 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-256959:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 01:25:15.366008  276743 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-256959:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.838540611s)
	I1212 01:25:15.366040  276743 kic.go:203] duration metric: took 3.838711624s to extract preloaded images to volume ...
	W1212 01:25:15.366202  276743 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 01:25:15.366316  276743 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 01:25:15.418308  276743 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-256959 --name newest-cni-256959 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-256959 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-256959 --network newest-cni-256959 --ip 192.168.76.2 --volume newest-cni-256959:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 01:25:15.701138  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Running}}
	I1212 01:25:15.722831  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:25:15.744776  276743 cli_runner.go:164] Run: docker exec newest-cni-256959 stat /var/lib/dpkg/alternatives/iptables
	I1212 01:25:15.800337  276743 oci.go:144] the created container "newest-cni-256959" has a running status.
	I1212 01:25:15.800363  276743 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa...
	I1212 01:25:16.255229  276743 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 01:25:16.278394  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:25:16.296982  276743 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 01:25:16.297007  276743 kic_runner.go:114] Args: [docker exec --privileged newest-cni-256959 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 01:25:16.336579  276743 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:25:16.356755  276743 machine.go:94] provisionDockerMachine start ...
	I1212 01:25:16.356843  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:16.374168  276743 main.go:143] libmachine: Using SSH client type: native
	I1212 01:25:16.374501  276743 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 01:25:16.374511  276743 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:25:16.375249  276743 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42232->127.0.0.1:33093: read: connection reset by peer
	I1212 01:25:19.530494  276743 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:25:19.530520  276743 ubuntu.go:182] provisioning hostname "newest-cni-256959"
	I1212 01:25:19.530584  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:19.548139  276743 main.go:143] libmachine: Using SSH client type: native
	I1212 01:25:19.548459  276743 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 01:25:19.548475  276743 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-256959 && echo "newest-cni-256959" | sudo tee /etc/hostname
	I1212 01:25:19.704022  276743 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:25:19.704112  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:19.721641  276743 main.go:143] libmachine: Using SSH client type: native
	I1212 01:25:19.721955  276743 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1212 01:25:19.721980  276743 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-256959' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-256959/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-256959' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:25:19.879218  276743 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:25:19.879312  276743 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 01:25:19.879367  276743 ubuntu.go:190] setting up certificates
	I1212 01:25:19.879397  276743 provision.go:84] configureAuth start
	I1212 01:25:19.879518  276743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:25:19.896152  276743 provision.go:143] copyHostCerts
	I1212 01:25:19.896221  276743 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 01:25:19.896234  276743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 01:25:19.896315  276743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 01:25:19.896434  276743 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 01:25:19.896445  276743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 01:25:19.896476  276743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 01:25:19.896542  276743 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 01:25:19.896551  276743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 01:25:19.896577  276743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 01:25:19.896641  276743 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.newest-cni-256959 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-256959]
	I1212 01:25:20.204760  276743 provision.go:177] copyRemoteCerts
	I1212 01:25:20.204827  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:25:20.204875  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.224116  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.330622  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:25:20.348480  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:25:20.366287  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:25:20.383425  276743 provision.go:87] duration metric: took 503.997002ms to configureAuth
	I1212 01:25:20.383450  276743 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:25:20.383651  276743 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:25:20.383658  276743 machine.go:97] duration metric: took 4.026884265s to provisionDockerMachine
	I1212 01:25:20.383665  276743 client.go:176] duration metric: took 9.536770098s to LocalClient.Create
	I1212 01:25:20.383678  276743 start.go:167] duration metric: took 9.536832859s to libmachine.API.Create "newest-cni-256959"
	I1212 01:25:20.383685  276743 start.go:293] postStartSetup for "newest-cni-256959" (driver="docker")
	I1212 01:25:20.383694  276743 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:25:20.383742  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:25:20.383784  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.400325  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.507208  276743 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:25:20.510550  276743 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:25:20.510580  276743 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
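
The VERSION_CODENAME warning just above is harmless: libmachine decodes /etc/os-release into a fixed struct by field name, and Debian bookworm ships keys the struct does not declare. A minimal sketch of the same parse, collecting all keys into a map instead:

    // Sketch: read /etc/os-release KEY=value pairs into a map. libmachine
    // decodes the same file into a fixed struct, so unknown keys such as
    // VERSION_CODENAME trigger the warning logged above.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/os-release")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        info := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            if k, v, ok := strings.Cut(line, "="); ok {
                info[k] = strings.Trim(v, `"`)
            }
        }
        fmt.Println(info["PRETTY_NAME"]) // e.g. Debian GNU/Linux 12 (bookworm)
    }
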
	I1212 01:25:20.510595  276743 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 01:25:20.510649  276743 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 01:25:20.510733  276743 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 01:25:20.510839  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:25:20.518172  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:25:20.536052  276743 start.go:296] duration metric: took 152.353471ms for postStartSetup
	I1212 01:25:20.536438  276743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:25:20.555757  276743 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:25:20.556035  276743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:25:20.556076  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.580490  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.688189  276743 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:25:20.693057  276743 start.go:128] duration metric: took 9.850603168s to createHost
	I1212 01:25:20.693084  276743 start.go:83] releasing machines lock for "newest-cni-256959", held for 9.850734377s
	I1212 01:25:20.693172  276743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:25:20.709859  276743 ssh_runner.go:195] Run: cat /version.json
	I1212 01:25:20.709914  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.710177  276743 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:25:20.710239  276743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:25:20.729457  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.741797  276743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:25:20.834710  276743 ssh_runner.go:195] Run: systemctl --version
	I1212 01:25:20.924313  276743 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:25:20.928847  276743 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:25:20.928951  276743 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:25:20.954175  276743 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
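
The find/mv invocation above sidelines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI minikube installs (kindnet, chosen later) stays active. The same effect in plain Go, as a sketch:

    // Sketch of the CNI-disabling step above: rename bridge/podman configs in
    // /etc/cni/net.d with a .mk_disabled suffix so the runtime ignores them.
    package main

    import (
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        matches, _ := filepath.Glob("/etc/cni/net.d/*")
        for _, p := range matches {
            base := filepath.Base(p)
            if strings.HasSuffix(base, ".mk_disabled") {
                continue // already disabled
            }
            if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
                os.Rename(p, p+".mk_disabled") // mirrors `sudo mv {} {}.mk_disabled`
            }
        }
    }
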
	I1212 01:25:20.954200  276743 start.go:496] detecting cgroup driver to use...
	I1212 01:25:20.954231  276743 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:25:20.954281  276743 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 01:25:20.969656  276743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 01:25:20.982642  276743 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:25:20.982710  276743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:25:20.999922  276743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:25:21.020615  276743 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:25:21.141838  276743 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:25:21.263080  276743 docker.go:234] disabling docker service ...
	I1212 01:25:21.263148  276743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:25:21.287246  276743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:25:21.310750  276743 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:25:21.444187  276743 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:25:21.567374  276743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:25:21.580397  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:25:21.594203  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 01:25:21.603451  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 01:25:21.612481  276743 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 01:25:21.612614  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 01:25:21.621568  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:25:21.630132  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 01:25:21.639772  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:25:21.648550  276743 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:25:21.656926  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 01:25:21.666027  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 01:25:21.675421  276743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 01:25:21.684275  276743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:25:21.692101  276743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:25:21.699082  276743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:25:21.804978  276743 ssh_runner.go:195] Run: sudo systemctl restart containerd
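
The run of sed commands above rewrites /etc/containerd/config.toml in place before restarting containerd: pin the pause image, force SystemdCgroup = false to match the detected cgroupfs driver, and point conf_dir at /etc/cni/net.d. A sketch of a few of those edits as Go regexp substitutions (illustrative only; minikube really does shell out to sed):

    // Sketch: some of the in-place config.toml edits, as Go regexp substitutions.
    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        cfg, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        subs := []struct{ re, repl string }{
            {`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`},
            {`(?m)^( *)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
            {`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`}, // host uses cgroupfs
            {`(?m)^( *)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
        }
        for _, s := range subs {
            cfg = regexp.MustCompile(s.re).ReplaceAll(cfg, []byte(s.repl))
        }
        if err := os.WriteFile(path, cfg, 0o644); err != nil {
            panic(err)
        }
    }
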
	I1212 01:25:21.939895  276743 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 01:25:21.939976  276743 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 01:25:21.944016  276743 start.go:564] Will wait 60s for crictl version
	I1212 01:25:21.944158  276743 ssh_runner.go:195] Run: which crictl
	I1212 01:25:21.947592  276743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:25:21.970388  276743 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
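
Both 60-second waits above (first for the containerd socket, then for a working crictl) are simple polling loops. A hedged sketch of that pattern:

    // Sketch of the two 60s waits above: poll until the containerd socket
    // exists, then until `crictl version` succeeds.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func waitFor(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        must := func(err error) {
            if err != nil {
                panic(err)
            }
        }
        must(waitFor(60*time.Second, func() error {
            _, err := os.Stat("/run/containerd/containerd.sock")
            return err
        }))
        must(waitFor(60*time.Second, func() error {
            return exec.Command("sudo", "crictl", "version").Run()
        }))
    }
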
	I1212 01:25:21.970506  276743 ssh_runner.go:195] Run: containerd --version
	I1212 01:25:21.989928  276743 ssh_runner.go:195] Run: containerd --version
	I1212 01:25:22.016197  276743 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 01:25:22.019317  276743 cli_runner.go:164] Run: docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:25:22.042635  276743 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 01:25:22.047510  276743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:25:22.062564  276743 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 01:25:22.065397  276743 kubeadm.go:884] updating cluster {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:25:22.065551  276743 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:25:22.065640  276743 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:25:22.102156  276743 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:25:22.102183  276743 containerd.go:534] Images already preloaded, skipping extraction
	I1212 01:25:22.102250  276743 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:25:22.129883  276743 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:25:22.129908  276743 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:25:22.129916  276743 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1212 01:25:22.130003  276743 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-256959 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:25:22.130072  276743 ssh_runner.go:195] Run: sudo crictl info
	I1212 01:25:22.157382  276743 cni.go:84] Creating CNI manager for ""
	I1212 01:25:22.157407  276743 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:25:22.157422  276743 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 01:25:22.157449  276743 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-256959 NodeName:newest-cni-256959 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:25:22.157566  276743 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-256959"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
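The kubeadm.yaml rendered above is one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch that splits such a stream and reports each document's kind, using gopkg.in/yaml.v3 (an assumption; minikube templates the file rather than parsing it back):

    // Sketch: split a multi-document kubeadm config like the one above and
    // print each document's apiVersion/kind.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            // prints kubeadm.k8s.io/v1beta4 / InitConfiguration, and so on
            fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
        }
    }
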
	I1212 01:25:22.157640  276743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 01:25:22.165592  276743 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:25:22.165664  276743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:25:22.173544  276743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:25:22.186913  276743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 01:25:22.199980  276743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
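
"scp memory --> path (N bytes)" in the lines above means the content is rendered in-process and streamed over SSH rather than copied from a local file. A minimal sketch of that idea with golang.org/x/crypto/ssh, reusing the connection details logged earlier (port 33093, user docker); piping to sudo tee is an assumption about the mechanism, not minikube's exact transport:

    // Sketch (mechanism assumed): stream an in-memory rendering over SSH and
    // write it on the remote side with sudo tee.
    package main

    import (
        "bytes"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33093", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader([]byte("# rendered kubeadm.yaml contents...\n"))
        if err := sess.Run("sudo tee /var/tmp/minikube/kubeadm.yaml.new >/dev/null"); err != nil {
            panic(err)
        }
    }
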
	I1212 01:25:22.212497  276743 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:25:22.216212  276743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:25:22.226129  276743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:25:22.341565  276743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:25:22.362735  276743 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959 for IP: 192.168.76.2
	I1212 01:25:22.362758  276743 certs.go:195] generating shared ca certs ...
	I1212 01:25:22.362774  276743 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:22.362922  276743 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 01:25:22.362982  276743 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 01:25:22.363063  276743 certs.go:257] generating profile certs ...
	I1212 01:25:22.363128  276743 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key
	I1212 01:25:22.363145  276743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.crt with IP's: []
	I1212 01:25:23.043220  276743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.crt ...
	I1212 01:25:23.043305  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.crt: {Name:mke800b4895a7f26c3f61118ac2a9636e3a9248a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.043557  276743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key ...
	I1212 01:25:23.043596  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key: {Name:mkb2206776a08341de5b9d37086d859f3539aa54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.043743  276743 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93
	I1212 01:25:23.043783  276743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1212 01:25:23.163980  276743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93 ...
	I1212 01:25:23.164017  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93: {Name:mk05b9dd6b8930af6580fe78d40e6026f3e8847a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.164237  276743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93 ...
	I1212 01:25:23.164254  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93: {Name:mk5e2ac6bbc37c39d5b319f8600a5d25e63c4a12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.164355  276743 certs.go:382] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt.b05ecb93 -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt
	I1212 01:25:23.164449  276743 certs.go:386] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93 -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key
	I1212 01:25:23.164518  276743 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key
	I1212 01:25:23.164541  276743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt with IP's: []
	I1212 01:25:23.503416  276743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt ...
	I1212 01:25:23.503453  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt: {Name:mka5a6a7cee07eb7c969d496d8aa380d667ba867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.503635  276743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key ...
	I1212 01:25:23.503652  276743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key: {Name:mkd0b1a9e86a7f90668157e83a73d06f56064ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:23.503848  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 01:25:23.503900  276743 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 01:25:23.503913  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:25:23.503965  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:25:23.503999  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:25:23.504032  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 01:25:23.504080  276743 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:25:23.504696  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:25:23.524669  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:25:23.551586  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:25:23.572683  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:25:23.594234  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:25:23.612544  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:25:23.629869  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:25:23.647023  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:25:23.664698  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 01:25:23.682123  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:25:23.699502  276743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 01:25:23.716689  276743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:25:23.729602  276743 ssh_runner.go:195] Run: openssl version
	I1212 01:25:23.735703  276743 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.742851  276743 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 01:25:23.750077  276743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.753690  276743 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.753758  276743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 01:25:23.795206  276743 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:25:23.802639  276743 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4290.pem /etc/ssl/certs/51391683.0
	I1212 01:25:23.809951  276743 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.817139  276743 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 01:25:23.830841  276743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.834926  276743 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.835070  276743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 01:25:23.876422  276743 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 01:25:23.885502  276743 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42902.pem /etc/ssl/certs/3ec20f2e.0
	I1212 01:25:23.893037  276743 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.900652  276743 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:25:23.908635  276743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.912614  276743 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.912690  276743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:25:23.956401  276743 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:25:23.964299  276743 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
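
The ln -fs steps above implement OpenSSL's hashed-directory lookup: verification finds a CA in /etc/ssl/certs via a symlink named <subject-hash>.0 (b5213941.0 for minikubeCA here). A sketch of the convention, shelling out to openssl for the hash as the log does:

    // Sketch of the hashed-directory convention behind the ln -fs steps above:
    // compute the subject hash with openssl and create <hash>.0 -> PEM.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // "b5213941" for minikubeCA above
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        os.Remove(link) // emulate ln -f: replace any existing link
        if err := os.Symlink(pemPath, link); err != nil {
            panic(err)
        }
    }
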
	I1212 01:25:23.971681  276743 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:25:23.975558  276743 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 01:25:23.975609  276743 kubeadm.go:401] StartCluster: {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:25:23.975697  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 01:25:23.975759  276743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:25:24.003976  276743 cri.go:89] found id: ""
	I1212 01:25:24.004073  276743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:25:24.014227  276743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:25:24.022866  276743 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:25:24.022958  276743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:25:24.031328  276743 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:25:24.031359  276743 kubeadm.go:158] found existing configuration files:
	
	I1212 01:25:24.031425  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:25:24.039632  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:25:24.039710  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:25:24.047426  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:25:24.055269  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:25:24.055386  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:25:24.062906  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:25:24.070757  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:25:24.070846  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:25:24.078322  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:25:24.086235  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:25:24.086340  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
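
The grep/rm sequence just above keeps a kubeconfig only if it already points at the expected control-plane endpoint, and deletes it otherwise so kubeadm regenerates it. The same check as a local Go sketch:

    // Sketch of the stale-config check above: keep each kubeconfig only if it
    // already references the expected control-plane endpoint.
    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(f) // missing or stale; kubeadm will regenerate it
            }
        }
    }
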
	I1212 01:25:24.093978  276743 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:25:24.130495  276743 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:25:24.130556  276743 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:25:24.204494  276743 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:25:24.204576  276743 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:25:24.204617  276743 kubeadm.go:319] OS: Linux
	I1212 01:25:24.204667  276743 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:25:24.204719  276743 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:25:24.204770  276743 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:25:24.204821  276743 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:25:24.204871  276743 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:25:24.204928  276743 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:25:24.204978  276743 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:25:24.205039  276743 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:25:24.205089  276743 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:25:24.274059  276743 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:25:24.274248  276743 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:25:24.274393  276743 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:25:24.281432  276743 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:25:24.288265  276743 out.go:252]   - Generating certificates and keys ...
	I1212 01:25:24.288438  276743 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:25:24.288544  276743 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:25:24.872395  276743 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 01:25:24.948048  276743 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 01:25:25.302518  276743 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 01:25:25.648856  276743 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 01:25:25.789938  276743 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 01:25:25.790397  276743 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 01:25:26.099340  276743 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 01:25:26.099559  276743 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 01:25:26.538607  276743 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 01:25:27.389042  276743 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 01:25:27.842473  276743 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 01:25:27.842877  276743 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:25:27.936371  276743 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:25:28.210661  276743 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:25:28.314836  276743 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:25:28.428208  276743 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:25:28.580595  276743 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:25:28.581418  276743 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:25:28.584199  276743 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:25:28.587820  276743 out.go:252]   - Booting up control plane ...
	I1212 01:25:28.587929  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:25:28.588012  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:25:28.589356  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:25:28.605527  276743 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:25:28.605678  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:25:28.613455  276743 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:25:28.614074  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:25:28.614240  276743 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:25:28.755452  276743 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:25:28.755580  276743 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:27:19.226422  268396 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000962088s
	I1212 01:27:19.226635  268396 kubeadm.go:319] 
	I1212 01:27:19.226702  268396 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:27:19.226735  268396 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:27:19.226840  268396 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:27:19.226847  268396 kubeadm.go:319] 
	I1212 01:27:19.227012  268396 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:27:19.227062  268396 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:27:19.227095  268396 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:27:19.227099  268396 kubeadm.go:319] 
	I1212 01:27:19.231490  268396 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:27:19.231948  268396 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:27:19.232070  268396 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:27:19.232304  268396 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 01:27:19.232318  268396 kubeadm.go:319] 
	W1212 01:27:19.232506  268396 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-361053] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-361053] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000962088s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
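The failed wait-control-plane phase above boils down to kubeadm polling the kubelet's healthz endpoint until it returns 200 or the 4-minute deadline expires; here the kubelet never came up, so the probe ended with "context deadline exceeded". A rough sketch of that probe:

    // Rough sketch of kubeadm's kubelet health wait: poll healthz until it
    // answers 200 OK or the 4m deadline passes (the failure mode seen above).
    package main

    import (
        "context"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        for {
            req, _ := http.NewRequestWithContext(ctx, http.MethodGet,
                "http://127.0.0.1:10248/healthz", nil)
            resp, err := http.DefaultClient.Do(req)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("kubelet healthy")
                    return
                }
            }
            select {
            case <-ctx.Done():
                fmt.Println("kubelet not healthy:", ctx.Err()) // context deadline exceeded
                return
            case <-time.After(time.Second):
            }
        }
    }
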
	I1212 01:27:19.232600  268396 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1212 01:27:19.232891  268396 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:27:19.641819  268396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:27:19.655717  268396 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:27:19.655786  268396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:27:19.664059  268396 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:27:19.664079  268396 kubeadm.go:158] found existing configuration files:
	
	I1212 01:27:19.664128  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:27:19.672510  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:27:19.672575  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:27:19.680342  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:27:19.688315  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:27:19.688383  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:27:19.696209  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:27:19.704155  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:27:19.704219  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:27:19.711899  268396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:27:19.719844  268396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:27:19.719910  268396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:27:19.727687  268396 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:27:19.860959  268396 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:27:19.861382  268396 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:27:19.927748  268396 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:29:28.751512  276743 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000466498s
	I1212 01:29:28.751546  276743 kubeadm.go:319] 
	I1212 01:29:28.751605  276743 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:29:28.751644  276743 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:29:28.751765  276743 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:29:28.751774  276743 kubeadm.go:319] 
	I1212 01:29:28.751883  276743 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:29:28.751919  276743 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:29:28.751963  276743 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:29:28.751972  276743 kubeadm.go:319] 
	I1212 01:29:28.757988  276743 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:29:28.758457  276743 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:29:28.758593  276743 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:29:28.759136  276743 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 01:29:28.759148  276743 kubeadm.go:319] 
	I1212 01:29:28.759296  276743 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 01:29:28.759448  276743 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-256959] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000466498s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 01:29:28.759537  276743 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
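Note: the failed init above ends with kubeadm polling the kubelet health endpoint for 4m0s, after which minikube resets and retries (the `kubeadm reset` run on the preceding line). A minimal sketch of running the same checks by hand inside the node (profile name newest-cni-256959 is taken from the cert lines in the dump above; assumes curl is present in the node image, as kubeadm's own error text implies):

	minikube ssh -p newest-cni-256959
	# inside the node:
	curl -sSL http://127.0.0.1:10248/healthz    # the endpoint kubeadm polls for up to 4m0s
	systemctl status kubelet                    # service state, per the advice in the dump above
	journalctl -xeu kubelet -n 50               # recent kubelet log entries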
	I1212 01:29:29.171145  276743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:29:29.184061  276743 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:29:29.184150  276743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:29:29.191792  276743 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:29:29.191813  276743 kubeadm.go:158] found existing configuration files:
	
	I1212 01:29:29.191872  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:29:29.199430  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:29:29.199502  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:29:29.206493  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:29:29.213869  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:29:29.213974  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:29:29.221146  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:29:29.228771  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:29:29.228848  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:29:29.236019  276743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:29:29.243394  276743 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:29:29.243513  276743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:29:29.250760  276743 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:29:29.289424  276743 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:29:29.289525  276743 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:29:29.367460  276743 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:29:29.367532  276743 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:29:29.367572  276743 kubeadm.go:319] OS: Linux
	I1212 01:29:29.367620  276743 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:29:29.367668  276743 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:29:29.367716  276743 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:29:29.367765  276743 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:29:29.367814  276743 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:29:29.367862  276743 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:29:29.367907  276743 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:29:29.367956  276743 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:29:29.368003  276743 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:29:29.435977  276743 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:29:29.436136  276743 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:29:29.436234  276743 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:29:29.447414  276743 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:29:29.452711  276743 out.go:252]   - Generating certificates and keys ...
	I1212 01:29:29.452896  276743 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:29:29.452999  276743 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:29:29.453121  276743 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:29:29.453232  276743 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:29:29.453362  276743 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:29:29.453468  276743 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 01:29:29.453582  276743 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:29:29.453693  276743 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:29:29.453811  276743 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:29:29.453920  276743 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:29:29.453981  276743 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 01:29:29.454074  276743 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:29:29.661293  276743 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:29:29.926167  276743 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:29:30.228322  276743 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:29:30.325953  276743 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:29:30.468055  276743 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:29:30.469327  276743 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:29:30.473394  276743 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:29:30.478856  276743 out.go:252]   - Booting up control plane ...
	I1212 01:29:30.478958  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:29:30.479046  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:29:30.479115  276743 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:29:30.498715  276743 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:29:30.498819  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:29:30.506278  276743 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:29:30.506595  276743 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:29:30.506638  276743 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:29:30.667439  276743 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:29:30.667560  276743 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:31:22.255083  268396 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 01:31:22.255118  268396 kubeadm.go:319] 
	I1212 01:31:22.255185  268396 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:31:22.259224  268396 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 01:31:22.259291  268396 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:31:22.259384  268396 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:31:22.259445  268396 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:31:22.259485  268396 kubeadm.go:319] OS: Linux
	I1212 01:31:22.259534  268396 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:31:22.259586  268396 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:31:22.259638  268396 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:31:22.259689  268396 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:31:22.259742  268396 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:31:22.259793  268396 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:31:22.259842  268396 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:31:22.259894  268396 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:31:22.259943  268396 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:31:22.260016  268396 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:31:22.260113  268396 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:31:22.260208  268396 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:31:22.260274  268396 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:31:22.264965  268396 out.go:252]   - Generating certificates and keys ...
	I1212 01:31:22.265061  268396 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:31:22.265129  268396 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:31:22.265205  268396 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:31:22.265267  268396 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:31:22.265335  268396 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:31:22.265389  268396 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 01:31:22.265452  268396 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:31:22.265511  268396 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:31:22.265581  268396 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:31:22.265657  268396 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:31:22.265698  268396 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 01:31:22.265754  268396 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:31:22.265805  268396 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:31:22.265863  268396 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:31:22.265922  268396 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:31:22.265985  268396 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:31:22.266040  268396 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:31:22.266122  268396 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:31:22.266188  268396 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:31:22.269011  268396 out.go:252]   - Booting up control plane ...
	I1212 01:31:22.269113  268396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:31:22.269196  268396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:31:22.269313  268396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:31:22.269458  268396 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:31:22.269587  268396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:31:22.269697  268396 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:31:22.269820  268396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:31:22.269866  268396 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:31:22.270050  268396 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:31:22.270170  268396 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:31:22.270256  268396 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001218388s
	I1212 01:31:22.270267  268396 kubeadm.go:319] 
	I1212 01:31:22.270326  268396 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:31:22.270369  268396 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:31:22.270483  268396 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:31:22.270503  268396 kubeadm.go:319] 
	I1212 01:31:22.270616  268396 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:31:22.270657  268396 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:31:22.270717  268396 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:31:22.270757  268396 kubeadm.go:319] 
	I1212 01:31:22.270858  268396 kubeadm.go:403] duration metric: took 8m7.867624823s to StartCluster
	I1212 01:31:22.270898  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:31:22.270968  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:31:22.306968  268396 cri.go:89] found id: ""
	I1212 01:31:22.307036  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.307047  268396 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:31:22.307054  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:31:22.307137  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:31:22.339653  268396 cri.go:89] found id: ""
	I1212 01:31:22.339689  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.339700  268396 logs.go:284] No container was found matching "etcd"
	I1212 01:31:22.339706  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:31:22.339765  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:31:22.368586  268396 cri.go:89] found id: ""
	I1212 01:31:22.368607  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.368615  268396 logs.go:284] No container was found matching "coredns"
	I1212 01:31:22.368621  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:31:22.368680  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:31:22.393839  268396 cri.go:89] found id: ""
	I1212 01:31:22.393912  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.393934  268396 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:31:22.393960  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:31:22.394035  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:31:22.419583  268396 cri.go:89] found id: ""
	I1212 01:31:22.419608  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.419616  268396 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:31:22.419622  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:31:22.419680  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:31:22.448415  268396 cri.go:89] found id: ""
	I1212 01:31:22.448443  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.448451  268396 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:31:22.448459  268396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:31:22.448517  268396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:31:22.476913  268396 cri.go:89] found id: ""
	I1212 01:31:22.476939  268396 logs.go:282] 0 containers: []
	W1212 01:31:22.476947  268396 logs.go:284] No container was found matching "kindnet"
	I1212 01:31:22.476956  268396 logs.go:123] Gathering logs for kubelet ...
	I1212 01:31:22.476983  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:31:22.533409  268396 logs.go:123] Gathering logs for dmesg ...
	I1212 01:31:22.533444  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:31:22.548368  268396 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:31:22.548401  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:31:22.614148  268396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:31:22.605942    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.606490    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608232    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608633    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.610124    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:31:22.605942    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.606490    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608232    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.608633    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:31:22.610124    5449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:31:22.614173  268396 logs.go:123] Gathering logs for containerd ...
	I1212 01:31:22.614185  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:31:22.656511  268396 logs.go:123] Gathering logs for container status ...
	I1212 01:31:22.656543  268396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:31:22.687238  268396 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001218388s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:31:22.687348  268396 out.go:285] * 
	W1212 01:31:22.687426  268396 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001218388s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:31:22.687441  268396 out.go:285] * 
	W1212 01:31:22.689841  268396 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:31:22.695133  268396 out.go:203] 
	W1212 01:31:22.698069  268396 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001218388s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:31:22.698114  268396 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 01:31:22.698136  268396 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 01:31:22.701165  268396 out.go:203] 
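The exit message above suggests retrying with an explicit kubelet cgroup driver. Spelled out as a command (a sketch only: the --extra-config flag is quoted verbatim from the suggestion, the profile, driver, runtime, and version flags from this test's own invocation):

	out/minikube-linux-arm64 start -p no-preload-361053 \
	  --extra-config=kubelet.cgroup-driver=systemd \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0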
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 01:23:04 no-preload-361053 containerd[760]: time="2025-12-12T01:23:04.384035480Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:05 no-preload-361053 containerd[760]: time="2025-12-12T01:23:05.373522512Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 12 01:23:05 no-preload-361053 containerd[760]: time="2025-12-12T01:23:05.375857443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 12 01:23:05 no-preload-361053 containerd[760]: time="2025-12-12T01:23:05.392808161Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:05 no-preload-361053 containerd[760]: time="2025-12-12T01:23:05.393494461Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:06 no-preload-361053 containerd[760]: time="2025-12-12T01:23:06.469307146Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 12 01:23:06 no-preload-361053 containerd[760]: time="2025-12-12T01:23:06.471770579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 12 01:23:06 no-preload-361053 containerd[760]: time="2025-12-12T01:23:06.487399898Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:06 no-preload-361053 containerd[760]: time="2025-12-12T01:23:06.488294224Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:07 no-preload-361053 containerd[760]: time="2025-12-12T01:23:07.584315785Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 12 01:23:07 no-preload-361053 containerd[760]: time="2025-12-12T01:23:07.586361959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 12 01:23:07 no-preload-361053 containerd[760]: time="2025-12-12T01:23:07.593980543Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:07 no-preload-361053 containerd[760]: time="2025-12-12T01:23:07.594678428Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:09 no-preload-361053 containerd[760]: time="2025-12-12T01:23:09.125818664Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 12 01:23:09 no-preload-361053 containerd[760]: time="2025-12-12T01:23:09.128180286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 12 01:23:09 no-preload-361053 containerd[760]: time="2025-12-12T01:23:09.138535463Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:09 no-preload-361053 containerd[760]: time="2025-12-12T01:23:09.139822900Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.221236720Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.223326176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.237305395Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.238695242Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.594370471Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.596617443Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.603750918Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 12 01:23:10 no-preload-361053 containerd[760]: time="2025-12-12T01:23:10.604059262Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:33:08.395142    6813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:08.395694    6813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:08.397161    6813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:08.397475    6813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:08.398912    6813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	[Dec12 00:40] hrtimer: interrupt took 11339963 ns
	
	
	==> kernel <==
	 01:33:08 up  2:15,  0 user,  load average: 0.30, 1.00, 1.77
	Linux no-preload-361053 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 01:33:05 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:33:05 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 458.
	Dec 12 01:33:05 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:33:05 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:33:05 no-preload-361053 kubelet[6693]: E1212 01:33:05.834182    6693 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:33:05 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:33:05 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:33:06 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 459.
	Dec 12 01:33:06 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:33:06 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:33:06 no-preload-361053 kubelet[6698]: E1212 01:33:06.576957    6698 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:33:06 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:33:06 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:33:07 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 460.
	Dec 12 01:33:07 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:33:07 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:33:07 no-preload-361053 kubelet[6709]: E1212 01:33:07.365498    6709 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:33:07 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:33:07 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:33:08 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 461.
	Dec 12 01:33:08 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:33:08 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:33:08 no-preload-361053 kubelet[6736]: E1212 01:33:08.080633    6736 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:33:08 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:33:08 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
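The crash loop in the kubelet log above is the root cause of this group of failures: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host, and these Ubuntu 20.04 runners still default to the legacy hierarchy. A quick way to confirm which cgroup version a node uses (a minimal sketch; the path assumes the standard systemd mount point):

    # cgroup2fs => cgroup v2 (unified); tmpfs => cgroup v1 (legacy/hybrid)
    stat -fc %T /sys/fs/cgroup/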
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-361053 -n no-preload-361053
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-361053 -n no-preload-361053: exit status 6 (363.62984ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 01:33:08.889442  286910 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-361053" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig

** /stderr **
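The status error here is a stale kubeconfig rather than a probe of the cluster itself: the profile's endpoint is missing from the kubeconfig, so `status` exits 6. The fix the output suggests, applied to this profile (a sketch, reusing the profile name from the logs):

    minikube update-context -p no-preload-361053
    kubectl config current-context    # should print no-preload-361053 afterwards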
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-361053" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (101.55s)

TestStartStop/group/no-preload/serial/SecondStart (371.01s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m8.17335056s)

-- stdout --
	* [no-preload-361053] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-361053" primary control-plane node in "no-preload-361053" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1212 01:33:10.429459  287206 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:33:10.429581  287206 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:33:10.429595  287206 out.go:374] Setting ErrFile to fd 2...
	I1212 01:33:10.429600  287206 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:33:10.429856  287206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:33:10.430230  287206 out.go:368] Setting JSON to false
	I1212 01:33:10.431163  287206 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8137,"bootTime":1765495054,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 01:33:10.431230  287206 start.go:143] virtualization:  
	I1212 01:33:10.434281  287206 out.go:179] * [no-preload-361053] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:33:10.438251  287206 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:33:10.438392  287206 notify.go:221] Checking for updates...
	I1212 01:33:10.444185  287206 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:33:10.447214  287206 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:33:10.450100  287206 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 01:33:10.452984  287206 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:33:10.455808  287206 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:33:10.459169  287206 config.go:182] Loaded profile config "no-preload-361053": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:33:10.459786  287206 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:33:10.491859  287206 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:33:10.491978  287206 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:33:10.546591  287206 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:33:10.536325619 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:33:10.546711  287206 docker.go:319] overlay module found
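minikube asks the daemon for the full `{{json .}}` blob and parses it in info.go; when checking a single field by hand, a Go-template query against the same endpoint is enough (a sketch; CgroupVersion assumes a daemon new enough to report it, which 28.1.1 is):

    # the two fields relevant to the kubelet failure earlier in this report
    docker info --format '{{.CgroupDriver}} cgroup v{{.CgroupVersion}}'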
	I1212 01:33:10.549899  287206 out.go:179] * Using the docker driver based on existing profile
	I1212 01:33:10.552847  287206 start.go:309] selected driver: docker
	I1212 01:33:10.552889  287206 start.go:927] validating driver "docker" against &{Name:no-preload-361053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:33:10.552995  287206 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:33:10.553716  287206 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:33:10.609060  287206 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:33:10.599832814 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:33:10.609400  287206 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:33:10.609433  287206 cni.go:84] Creating CNI manager for ""
	I1212 01:33:10.609483  287206 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:33:10.609530  287206 start.go:353] cluster config:
	{Name:no-preload-361053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:33:10.614478  287206 out.go:179] * Starting "no-preload-361053" primary control-plane node in "no-preload-361053" cluster
	I1212 01:33:10.617235  287206 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 01:33:10.620106  287206 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:33:10.622869  287206 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:33:10.622947  287206 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:33:10.623042  287206 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/config.json ...
	I1212 01:33:10.623355  287206 cache.go:107] acquiring lock: {Name:mk86e2a34ccf063d967d1b885c7693629a6b1892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623437  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1212 01:33:10.623451  287206 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 115.784µs
	I1212 01:33:10.623465  287206 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1212 01:33:10.623481  287206 cache.go:107] acquiring lock: {Name:mk5046428d0406b9fe0bac2e28c1f5cc3958499f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623518  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1212 01:33:10.623527  287206 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 47.795µs
	I1212 01:33:10.623533  287206 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1212 01:33:10.623546  287206 cache.go:107] acquiring lock: {Name:mkc4887793edcc3c6296024b677e69f6ec1f79f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623586  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1212 01:33:10.623594  287206 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 49.322µs
	I1212 01:33:10.623600  287206 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1212 01:33:10.623610  287206 cache.go:107] acquiring lock: {Name:mkeb49560acf33aa79e308e0b71177927ef617d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623642  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1212 01:33:10.623650  287206 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 41.412µs
	I1212 01:33:10.623656  287206 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1212 01:33:10.623665  287206 cache.go:107] acquiring lock: {Name:mk2f0a11f2d527d62eb30e98e76f3a359773886b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623691  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1212 01:33:10.623696  287206 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.763µs
	I1212 01:33:10.623707  287206 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1212 01:33:10.623716  287206 cache.go:107] acquiring lock: {Name:mkf75c8f281a4d7578645f330ed9cc6bf48ab550 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623747  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1212 01:33:10.623755  287206 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 40.37µs
	I1212 01:33:10.623761  287206 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1212 01:33:10.623772  287206 cache.go:107] acquiring lock: {Name:mk1d6384b2d8bd32efb0f4661eaa55ecd74d4b80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623803  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1212 01:33:10.623812  287206 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 42.807µs
	I1212 01:33:10.623817  287206 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1212 01:33:10.623321  287206 cache.go:107] acquiring lock: {Name:mk71cce41032f52f0748ef343d21f16410e3a1fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623892  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1212 01:33:10.623901  287206 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 595.264µs
	I1212 01:33:10.623907  287206 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1212 01:33:10.623913  287206 cache.go:87] Successfully saved all images to host disk.
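With --preload=false, each image is kept as a separate tarball under the profile's cache, and the `exists` checks above short-circuit every save. The layout can be listed directly (a sketch using the MINIKUBE_HOME from this run):

    ls /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/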
	I1212 01:33:10.643214  287206 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:33:10.643238  287206 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:33:10.643258  287206 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:33:10.643289  287206 start.go:360] acquireMachinesLock for no-preload-361053: {Name:mk154c67822339b116aad3ea851214e3043755e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.643359  287206 start.go:364] duration metric: took 48.558µs to acquireMachinesLock for "no-preload-361053"
	I1212 01:33:10.643382  287206 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:33:10.643393  287206 fix.go:54] fixHost starting: 
	I1212 01:33:10.643654  287206 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:33:10.661405  287206 fix.go:112] recreateIfNeeded on no-preload-361053: state=Stopped err=<nil>
	W1212 01:33:10.661436  287206 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:33:10.664651  287206 out.go:252] * Restarting existing docker container for "no-preload-361053" ...
	I1212 01:33:10.664755  287206 cli_runner.go:164] Run: docker start no-preload-361053
	I1212 01:33:10.948880  287206 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:33:10.974106  287206 kic.go:430] container "no-preload-361053" state is running.
	I1212 01:33:10.974585  287206 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-361053
	I1212 01:33:10.995294  287206 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/config.json ...
	I1212 01:33:10.995534  287206 machine.go:94] provisionDockerMachine start ...
	I1212 01:33:10.995608  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:11.019191  287206 main.go:143] libmachine: Using SSH client type: native
	I1212 01:33:11.019517  287206 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 01:33:11.019526  287206 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:33:11.020659  287206 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 01:33:14.170473  287206 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-361053
	
	I1212 01:33:14.170498  287206 ubuntu.go:182] provisioning hostname "no-preload-361053"
	I1212 01:33:14.170559  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:14.188567  287206 main.go:143] libmachine: Using SSH client type: native
	I1212 01:33:14.188886  287206 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 01:33:14.188903  287206 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-361053 && echo "no-preload-361053" | sudo tee /etc/hostname
	I1212 01:33:14.348144  287206 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-361053
	
	I1212 01:33:14.348281  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:14.367391  287206 main.go:143] libmachine: Using SSH client type: native
	I1212 01:33:14.367704  287206 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 01:33:14.367719  287206 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-361053' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-361053/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-361053' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:33:14.519558  287206 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:33:14.519628  287206 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 01:33:14.519686  287206 ubuntu.go:190] setting up certificates
	I1212 01:33:14.519722  287206 provision.go:84] configureAuth start
	I1212 01:33:14.519802  287206 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-361053
	I1212 01:33:14.543680  287206 provision.go:143] copyHostCerts
	I1212 01:33:14.543759  287206 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 01:33:14.543768  287206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 01:33:14.543857  287206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 01:33:14.543983  287206 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 01:33:14.543989  287206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 01:33:14.544018  287206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 01:33:14.544096  287206 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 01:33:14.544103  287206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 01:33:14.544130  287206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 01:33:14.544187  287206 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.no-preload-361053 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-361053]
	I1212 01:33:14.844647  287206 provision.go:177] copyRemoteCerts
	I1212 01:33:14.844713  287206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:33:14.844788  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:14.862571  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:14.966655  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:33:14.983728  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:33:15.000842  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 01:33:15.029620  287206 provision.go:87] duration metric: took 509.857308ms to configureAuth
	I1212 01:33:15.029672  287206 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:33:15.029880  287206 config.go:182] Loaded profile config "no-preload-361053": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:33:15.029895  287206 machine.go:97] duration metric: took 4.034345397s to provisionDockerMachine
	I1212 01:33:15.029904  287206 start.go:293] postStartSetup for "no-preload-361053" (driver="docker")
	I1212 01:33:15.029919  287206 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:33:15.029980  287206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:33:15.030125  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:15.050338  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:15.155159  287206 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:33:15.158821  287206 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:33:15.158851  287206 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:33:15.158882  287206 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 01:33:15.159031  287206 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 01:33:15.159139  287206 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 01:33:15.159244  287206 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:33:15.167777  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:33:15.188093  287206 start.go:296] duration metric: took 158.172096ms for postStartSetup
	I1212 01:33:15.188178  287206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:33:15.188223  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:15.205702  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:15.308942  287206 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:33:15.313983  287206 fix.go:56] duration metric: took 4.670584581s for fixHost
	I1212 01:33:15.314011  287206 start.go:83] releasing machines lock for "no-preload-361053", held for 4.670641336s
	I1212 01:33:15.314079  287206 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-361053
	I1212 01:33:15.332761  287206 ssh_runner.go:195] Run: cat /version.json
	I1212 01:33:15.332818  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:15.333070  287206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:33:15.333129  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:15.357886  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:15.373191  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:15.462718  287206 ssh_runner.go:195] Run: systemctl --version
	I1212 01:33:15.559571  287206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:33:15.564162  287206 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:33:15.564271  287206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:33:15.572295  287206 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 01:33:15.572323  287206 start.go:496] detecting cgroup driver to use...
	I1212 01:33:15.572376  287206 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:33:15.572457  287206 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 01:33:15.590265  287206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 01:33:15.603931  287206 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:33:15.604040  287206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:33:15.619709  287206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:33:15.633120  287206 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:33:15.745120  287206 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:33:15.856267  287206 docker.go:234] disabling docker service ...
	I1212 01:33:15.856362  287206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:33:15.872142  287206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:33:15.885538  287206 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:33:16.007318  287206 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:33:16.145250  287206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
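Stop alone would not survive socket activation or a dependency restart, hence the stop, disable, mask sequence applied to cri-docker and docker above; a masked unit is symlinked to /dev/null and cannot start until unmasked:

    systemctl is-enabled docker.service    # prints "masked" after the sequence above
    sudo systemctl unmask docker.service   # reverses it, should docker be needed again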
	I1212 01:33:16.158078  287206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:33:16.173659  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 01:33:16.183387  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 01:33:16.192439  287206 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 01:33:16.192510  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 01:33:16.201771  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:33:16.210383  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 01:33:16.219183  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:33:16.227825  287206 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:33:16.236204  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 01:33:16.245075  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 01:33:16.253975  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 01:33:16.263051  287206 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:33:16.271105  287206 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:33:16.278773  287206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:33:16.395685  287206 ssh_runner.go:195] Run: sudo systemctl restart containerd
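The sed edits above force `SystemdCgroup = false` so containerd's runc shim uses the cgroupfs driver detected on the host. After the restart, the merged configuration can be read back to confirm (a sketch; `containerd config dump` prints the effective config including imports):

    sudo containerd config dump | grep -n 'SystemdCgroup'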
	I1212 01:33:16.502787  287206 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 01:33:16.502918  287206 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 01:33:16.506854  287206 start.go:564] Will wait 60s for crictl version
	I1212 01:33:16.506959  287206 ssh_runner.go:195] Run: which crictl
	I1212 01:33:16.510418  287206 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:33:16.536180  287206 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 01:33:16.536315  287206 ssh_runner.go:195] Run: containerd --version
	I1212 01:33:16.557674  287206 ssh_runner.go:195] Run: containerd --version
	I1212 01:33:16.585134  287206 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 01:33:16.587946  287206 cli_runner.go:164] Run: docker network inspect no-preload-361053 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:33:16.609867  287206 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1212 01:33:16.613918  287206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
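The hosts update is written to be idempotent and sudo-safe: filter out any existing host.minikube.internal line, append the fresh mapping, then copy the temp file back as root (a plain `sudo ... > /etc/hosts` would fail, since the redirection is opened by the unprivileged shell, not by sudo). The same pattern for an arbitrary entry (a sketch with the gateway IP from this run):

    entry=$'192.168.85.1\thost.minikube.internal'
    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$entry"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$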
	I1212 01:33:16.623744  287206 kubeadm.go:884] updating cluster {Name:no-preload-361053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:33:16.623857  287206 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:33:16.623916  287206 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:33:16.650653  287206 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:33:16.650674  287206 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:33:16.650681  287206 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1212 01:33:16.650792  287206 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-361053 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
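The unit text above lands as the 10-kubeadm.conf drop-in, with an empty `ExecStart=` line to clear the base definition before supplying the full kubelet command line. On the node, the merged result can be inspected with systemd's own tooling:

    systemctl cat kubelet                  # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart    # the effective command line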
	I1212 01:33:16.650867  287206 ssh_runner.go:195] Run: sudo crictl info
	I1212 01:33:16.676358  287206 cni.go:84] Creating CNI manager for ""
	I1212 01:33:16.676391  287206 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:33:16.676434  287206 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 01:33:16.676473  287206 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-361053 NodeName:no-preload-361053 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:33:16.676614  287206 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-361053"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
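The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Recent kubeadm releases can sanity-check such a multi-document file offline (a sketch; the subcommand is assumed available in this kubeadm version):

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new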
	
	I1212 01:33:16.676692  287206 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 01:33:16.684549  287206 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:33:16.684631  287206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:33:16.692278  287206 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:33:16.704678  287206 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 01:33:16.717453  287206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1212 01:33:16.730349  287206 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:33:16.733792  287206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:33:16.743217  287206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:33:16.879123  287206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:33:16.896403  287206 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053 for IP: 192.168.85.2
	I1212 01:33:16.896424  287206 certs.go:195] generating shared ca certs ...
	I1212 01:33:16.896440  287206 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:33:16.896611  287206 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 01:33:16.896673  287206 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 01:33:16.896685  287206 certs.go:257] generating profile certs ...
	I1212 01:33:16.896802  287206 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/client.key
	I1212 01:33:16.896884  287206 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.key.40e68572
	I1212 01:33:16.896936  287206 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/proxy-client.key
	I1212 01:33:16.897085  287206 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 01:33:16.897122  287206 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 01:33:16.897140  287206 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:33:16.897182  287206 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:33:16.897211  287206 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:33:16.897253  287206 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 01:33:16.897323  287206 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:33:16.898045  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:33:16.917558  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:33:16.936420  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:33:16.954703  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:33:16.973775  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:33:16.993771  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:33:17.013800  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:33:17.032752  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:33:17.050974  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 01:33:17.069067  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:33:17.086383  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 01:33:17.103777  287206 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:33:17.116500  287206 ssh_runner.go:195] Run: openssl version
	I1212 01:33:17.123250  287206 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 01:33:17.130602  287206 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 01:33:17.138023  287206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 01:33:17.141876  287206 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 01:33:17.141967  287206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 01:33:17.183155  287206 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 01:33:17.190531  287206 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:33:17.197720  287206 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:33:17.205424  287206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:33:17.209634  287206 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:33:17.209717  287206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:33:17.250661  287206 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:33:17.257979  287206 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 01:33:17.265084  287206 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 01:33:17.272550  287206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 01:33:17.276176  287206 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 01:33:17.276244  287206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 01:33:17.316946  287206 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
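	[editor's note] The block above shows minikube's CA-trust convention: each PEM is copied under /usr/share/ca-certificates, symlinked from /etc/ssl/certs, hashed with `openssl x509 -hash -noout`, and the hash-named link (e.g. b5213941.0) is verified. A minimal Go sketch of that convention, shelling out to openssl; paths are illustrative, not minikube's implementation:

```go
// trustCert mirrors the ln -fs / openssl x509 -hash sequence in the log:
// compute the subject hash of a PEM and point /etc/ssl/certs/<hash>.0 at it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trustCert(pemPath string) error {
	// `openssl x509 -hash -noout` prints the subject hash (e.g. b5213941).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	// Illustrative path; requires root and an openssl binary on PATH.
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```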
	I1212 01:33:17.324295  287206 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:33:17.327973  287206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:33:17.368953  287206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:33:17.409868  287206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:33:17.453118  287206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:33:17.504589  287206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:33:17.551985  287206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
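	[editor's note] The six `openssl x509 -checkend 86400` runs above probe whether each control-plane cert expires within 24 hours. A pure-Go equivalent of that check, assuming only the standard library; the path is illustrative:

```go
// expiresWithin reports whether the certificate at path expires inside the
// given window, matching `openssl x509 -checkend <seconds>`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```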
	I1212 01:33:17.599976  287206 kubeadm.go:401] StartCluster: {Name:no-preload-361053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:33:17.600060  287206 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 01:33:17.600116  287206 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:33:17.627743  287206 cri.go:89] found id: ""
	I1212 01:33:17.627848  287206 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:33:17.635686  287206 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 01:33:17.635706  287206 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 01:33:17.635790  287206 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:33:17.642948  287206 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:33:17.643377  287206 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-361053" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:33:17.643480  287206 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-2343/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-361053" cluster setting kubeconfig missing "no-preload-361053" context setting]
	I1212 01:33:17.643818  287206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:33:17.645054  287206 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:33:17.652754  287206 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1212 01:33:17.652837  287206 kubeadm.go:602] duration metric: took 17.12476ms to restartPrimaryControlPlane
	I1212 01:33:17.652856  287206 kubeadm.go:403] duration metric: took 52.888265ms to StartCluster
	I1212 01:33:17.652873  287206 settings.go:142] acquiring lock: {Name:mk6dd4250df69aeba4752e9f33aeef37272375c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:33:17.652935  287206 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:33:17.654183  287206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
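	[editor's note] The kubeconfig.go lines above detect that the "no-preload-361053" cluster and context entries are missing and repair the file under a write lock. A minimal sketch of that repair using client-go's clientcmd package; names and the endpoint come from the log, the CA path is illustrative:

```go
// Load the kubeconfig, add the missing cluster/context entries, write it back.
package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/22101-2343/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}
	name := "no-preload-361053"
	if _, ok := cfg.Clusters[name]; !ok {
		cluster := api.NewCluster()
		cluster.Server = "https://192.168.85.2:8443"
		cluster.CertificateAuthority = "/home/jenkins/.minikube/ca.crt" // illustrative path
		cfg.Clusters[name] = cluster
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatal(err)
	}
}
```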
	I1212 01:33:17.654577  287206 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 01:33:17.656196  287206 config.go:182] Loaded profile config "no-preload-361053": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:33:17.656293  287206 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:33:17.656907  287206 addons.go:70] Setting storage-provisioner=true in profile "no-preload-361053"
	I1212 01:33:17.656987  287206 addons.go:239] Setting addon storage-provisioner=true in "no-preload-361053"
	I1212 01:33:17.657033  287206 host.go:66] Checking if "no-preload-361053" exists ...
	I1212 01:33:17.657291  287206 addons.go:70] Setting dashboard=true in profile "no-preload-361053"
	I1212 01:33:17.657329  287206 addons.go:239] Setting addon dashboard=true in "no-preload-361053"
	W1212 01:33:17.657367  287206 addons.go:248] addon dashboard should already be in state true
	I1212 01:33:17.657411  287206 host.go:66] Checking if "no-preload-361053" exists ...
	I1212 01:33:17.658033  287206 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:33:17.658984  287206 addons.go:70] Setting default-storageclass=true in profile "no-preload-361053"
	I1212 01:33:17.659056  287206 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-361053"
	I1212 01:33:17.659163  287206 out.go:179] * Verifying Kubernetes components...
	I1212 01:33:17.659657  287206 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:33:17.659918  287206 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:33:17.663168  287206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:33:17.699556  287206 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:33:17.702548  287206 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:33:17.702568  287206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:33:17.702633  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:17.707904  287206 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 01:33:17.712570  287206 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 01:33:17.715424  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 01:33:17.715452  287206 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 01:33:17.715527  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:17.716395  287206 addons.go:239] Setting addon default-storageclass=true in "no-preload-361053"
	I1212 01:33:17.716432  287206 host.go:66] Checking if "no-preload-361053" exists ...
	I1212 01:33:17.716844  287206 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:33:17.757307  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:17.780041  287206 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:33:17.780062  287206 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:33:17.780201  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:17.787971  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:17.824270  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
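	[editor's note] The repeated `docker container inspect -f` calls above use a Go template to pull the host port mapped to the container's 22/tcp, which the ssh clients then dial at 127.0.0.1:33098. A small sketch of the same lookup; the container name is taken from the log:

```go
// sshHostPort asks the docker CLI which host port is bound to the
// container's 22/tcp, using the same template as the log lines.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("no-preload-361053")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh on 127.0.0.1:" + port)
}
```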
	I1212 01:33:17.914381  287206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:33:17.932340  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:33:17.963955  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:33:17.997943  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 01:33:17.997970  287206 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 01:33:18.029336  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 01:33:18.029363  287206 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 01:33:18.049546  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 01:33:18.049613  287206 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 01:33:18.063361  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 01:33:18.063384  287206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 01:33:18.077187  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 01:33:18.077211  287206 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 01:33:18.090368  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 01:33:18.090397  287206 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 01:33:18.104111  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 01:33:18.104141  287206 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 01:33:18.117846  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 01:33:18.117869  287206 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 01:33:18.130797  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:33:18.130820  287206 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 01:33:18.144585  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:33:18.535208  287206 node_ready.go:35] waiting up to 6m0s for node "no-preload-361053" to be "Ready" ...
	W1212 01:33:18.535668  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.535741  287206 retry.go:31] will retry after 176.168279ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:18.535830  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.535866  287206 retry.go:31] will retry after 310.631399ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:18.536093  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.536119  287206 retry.go:31] will retry after 343.133583ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
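	[editor's note] Each "will retry after ..." line above comes from minikube's retry.go: the kubectl apply fails while the apiserver still refuses connections on :8443, and the apply is re-run after a growing, jittered delay. A sketch of that loop; the attempt count and backoff parameters are illustrative, not minikube's exact policy:

```go
// applyWithRetry re-runs `kubectl apply --force -f <manifest>` with jittered
// exponential backoff, mimicking the retry behavior visible in the log.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int) error {
	delay := 200 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
			return nil
		}
		// Jittered exponential backoff, like the growing delays in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	_ = applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5)
}
```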
	I1212 01:33:18.712568  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:18.773707  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.773739  287206 retry.go:31] will retry after 503.490188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.847154  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:33:18.879640  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:18.920064  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.920144  287206 retry.go:31] will retry after 545.970645ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:18.950800  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.950834  287206 retry.go:31] will retry after 319.954632ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:19.271042  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:33:19.278476  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:19.399940  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:19.399978  287206 retry.go:31] will retry after 290.065244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:19.400038  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:19.400050  287206 retry.go:31] will retry after 299.213835ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:19.466369  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:19.524517  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:19.524549  287206 retry.go:31] will retry after 743.245184ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:19.690541  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:33:19.700168  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:19.757922  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:19.758015  287206 retry.go:31] will retry after 985.188119ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:19.779719  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:19.779761  287206 retry.go:31] will retry after 704.931485ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:20.267995  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:20.329699  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:20.329775  287206 retry.go:31] will retry after 765.58633ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:20.485196  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:20.536023  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:33:20.550357  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:20.550436  287206 retry.go:31] will retry after 1.819808593s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:20.743955  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:20.831697  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:20.831734  287206 retry.go:31] will retry after 930.762916ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:21.095851  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:21.157009  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:21.157042  287206 retry.go:31] will retry after 1.605590789s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:21.763111  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:21.825538  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:21.825574  287206 retry.go:31] will retry after 2.503052767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:22.370497  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:22.431275  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:22.431307  287206 retry.go:31] will retry after 2.355012393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:22.763437  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:22.850160  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:22.850194  287206 retry.go:31] will retry after 1.879850762s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:23.035858  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:24.329354  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:24.389132  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:24.389164  287206 retry.go:31] will retry after 2.014894624s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:24.731243  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:33:24.786964  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:24.789370  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:24.789397  287206 retry.go:31] will retry after 4.117004363s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:24.843221  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:24.843251  287206 retry.go:31] will retry after 1.752927223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:25.535881  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:26.405127  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:26.464187  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:26.464220  287206 retry.go:31] will retry after 5.197320965s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:26.596983  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:26.656070  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:26.656104  287206 retry.go:31] will retry after 5.533382625s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:28.035833  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:28.907563  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:28.966861  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:28.966896  287206 retry.go:31] will retry after 5.418423295s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:30.036974  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:31.661672  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:31.760812  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:31.760847  287206 retry.go:31] will retry after 8.18905348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:32.189837  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:32.273064  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:32.273096  287206 retry.go:31] will retry after 6.81084135s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:32.535806  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:34.386064  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:34.462330  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:34.462361  287206 retry.go:31] will retry after 6.305262233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1212 01:33:34.536061  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:33:37.035955  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:33:39.036556  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
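	In parallel, node_ready.go keeps polling the node's Ready condition against https://192.168.85.2:8443 and logs each refused connection before retrying. A rough client-go equivalent of that loop (the node name and kubeconfig path are placeholders, not minikube's internal API):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node's Ready condition, tolerating apiserver
	// connection errors the same way the node_ready.go warnings do.
	func waitNodeReady(name, kubeconfig string, timeout time.Duration) error {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return err
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
			} else {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("node %q not Ready within %v", name, timeout)
	}

	func main() {
		fmt.Println(waitNodeReady("no-preload-361053", "/path/to/kubeconfig", 2*time.Minute))
	}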
	I1212 01:33:39.084930  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:39.143830  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:39.143862  287206 retry.go:31] will retry after 12.343488003s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1212 01:33:39.950184  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:40.025519  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:40.025559  287206 retry.go:31] will retry after 5.922815184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1212 01:33:40.768598  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:40.846292  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:40.846323  287206 retry.go:31] will retry after 13.102314865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1212 01:33:41.536492  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:33:43.536575  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:45.949469  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:46.015241  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:46.015277  287206 retry.go:31] will retry after 13.405032383s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1212 01:33:46.036443  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:33:48.036746  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:33:50.536732  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:51.488392  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:51.559260  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:51.559300  287206 retry.go:31] will retry after 18.362274333s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1212 01:33:53.036486  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:53.949110  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:54.011238  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:54.011276  287206 retry.go:31] will retry after 19.774665037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1212 01:33:55.536392  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:33:57.536557  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
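	Each "Run:" line above comes from ssh_runner.go, which executes the kubectl command on the node over SSH, one session per command, and captures its combined output. A small sketch with golang.org/x/crypto/ssh, assuming password auth for brevity (minikube itself uses key-based auth; the address and credentials are placeholders):

	package main

	import (
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	// runRemote opens one SSH session per command and captures combined
	// output, the pattern behind each ssh_runner.go "Run:" line.
	func runRemote(addr, user, password, cmd string) (string, error) {
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.Password(password)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runRemote("192.168.85.2:22", "docker", "changeme",
			"sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml")
		fmt.Println(out, err)
	}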
	I1212 01:33:59.421135  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:59.485322  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:59.485358  287206 retry.go:31] will retry after 11.142105361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1212 01:34:00.038446  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:02.536540  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:04.536688  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:07.036629  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:09.536438  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:34:09.921829  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:34:09.985846  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:34:09.985875  287206 retry.go:31] will retry after 18.589744512s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1212 01:34:10.627648  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:34:10.686876  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:34:10.686912  287206 retry.go:31] will retry after 19.942061986s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1212 01:34:11.536631  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:34:13.787002  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:34:13.855652  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:34:13.855679  287206 retry.go:31] will retry after 16.508119977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1212 01:34:14.036392  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:16.036509  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:18.036746  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:20.535704  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:22.536639  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:25.036477  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:27.536451  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:34:28.576798  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:34:28.636179  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:34:28.636209  287206 retry.go:31] will retry after 29.151273891s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1212 01:34:29.536571  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:34:30.364127  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:34:30.423592  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:34:30.423622  287206 retry.go:31] will retry after 42.216578771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1212 01:34:30.629600  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:34:30.691702  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:34:30.691800  287206 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1212 01:34:32.036503  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:34.036641  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:36.036848  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:38.536526  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:41.035818  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:43.036531  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:45.036798  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:47.536472  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:49.536610  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:52.036490  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:54.036574  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:56.536420  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:34:57.788055  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:34:57.848257  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:34:57.848347  287206 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1212 01:34:59.036628  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:01.535862  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:03.536510  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:06.036291  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:08.036500  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:10.036613  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:12.535792  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:12.641222  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:12.704850  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:35:12.704951  287206 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 01:35:12.708213  287206 out.go:179] * Enabled addons: 
	I1212 01:35:12.711265  287206 addons.go:530] duration metric: took 1m55.054971797s for enable addons: enabled=[]
	W1212 01:35:14.536558  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:17.036393  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:19.036650  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:21.536481  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:23.536603  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:26.036848  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:28.536033  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:30.536109  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:32.536508  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:35.036562  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:37.036788  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:39.536591  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:42.035934  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:44.036480  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:46.536110  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:48.536515  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:50.536753  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:53.036546  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:55.535981  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:57.536499  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:59.536582  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:02.036574  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:04.036717  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:06.536543  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:09.036693  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:11.536613  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:13.536774  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:16.036849  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:18.536512  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:21.036609  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:23.536552  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:26.036483  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:28.036687  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:30.036781  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:32.039431  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:34.536636  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:37.036646  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:39.036700  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:41.536608  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:44.036506  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:46.535816  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:48.537737  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:51.035818  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:53.036491  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:55.036601  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:57.536480  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:59.536672  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:02.037994  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:04.536623  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:07.035836  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:09.035916  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:11.036562  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:13.536873  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:16.035841  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:18.035966  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:20.036082  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:22.036593  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:24.536591  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:26.536664  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:29.036444  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:31.535946  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:33.536464  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:36.036277  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:38.536154  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:40.536718  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:43.036535  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:45.036776  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:47.536481  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:49.536708  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:52.036566  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:54.036621  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:56.536427  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:59.036479  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:01.535866  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:03.536428  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:05.536804  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:08.035822  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:10.036632  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:12.036672  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:14.536663  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:17.036493  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:19.535924  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:21.535995  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:23.536531  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:25.536763  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:28.036701  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:30.036797  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:32.536552  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:35.039253  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:37.536673  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:39.543660  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:42.036561  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:44.536569  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:47.036537  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:49.536417  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:51.536523  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:53.536619  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:55.536688  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:58.036700  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:00.536335  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:02.536671  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:05.036449  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:07.036549  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:09.036731  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:11.536529  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:13.536580  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:16.035987  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:18.036847  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:39:18.536213  287206 node_ready.go:38] duration metric: took 6m0.000908955s for node "no-preload-361053" to be "Ready" ...
	I1212 01:39:18.539274  287206 out.go:203] 
	W1212 01:39:18.542145  287206 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 01:39:18.542166  287206 out.go:285] * 
	W1212 01:39:18.544311  287206 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:39:18.547291  287206 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-361053
helpers_test.go:244: (dbg) docker inspect no-preload-361053:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd",
	        "Created": "2025-12-12T01:22:53.604240637Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 287337,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T01:33:10.69835803Z",
	            "FinishedAt": "2025-12-12T01:33:09.357122497Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/hostname",
	        "HostsPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/hosts",
	        "LogPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd-json.log",
	        "Name": "/no-preload-361053",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-361053:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-361053",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd",
	                "LowerDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-361053",
	                "Source": "/var/lib/docker/volumes/no-preload-361053/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-361053",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-361053",
	                "name.minikube.sigs.k8s.io": "no-preload-361053",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "61cc494dd067263f866e7781df4148bb8c831ce7801f7a97e8775eb48f40b482",
	            "SandboxKey": "/var/run/docker/netns/61cc494dd067",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-361053": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:bb:a3:34:c6:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee086efedb5c3900c251cd31f9316499408470e70a7d486e64d8b91c6bf60cd7",
	                    "EndpointID": "f480dff36972a9a192fc5dc57b92877bed5645512d8423e9e85ac35e1acb41cd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-361053",
	                        "68256fe8de3b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-361053 -n no-preload-361053
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-361053 -n no-preload-361053: exit status 2 (341.951892ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-361053 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-361053 logs -n 25: (1.301170815s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ default-k8s-diff-port-971096 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ pause   │ -p default-k8s-diff-port-971096 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ unpause │ -p default-k8s-diff-port-971096 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p disable-driver-mounts-539158                                                                                                                                                                                                                            │ disable-driver-mounts-539158 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-648696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ stop    │ -p embed-certs-648696 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ addons  │ enable dashboard -p embed-certs-648696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:24 UTC │
	│ image   │ embed-certs-648696 image list --format=json                                                                                                                                                                                                                │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ pause   │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ unpause │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ start   │ -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-361053 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:31 UTC │                     │
	│ stop    │ -p no-preload-361053 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │ 12 Dec 25 01:33 UTC │
	│ addons  │ enable dashboard -p no-preload-361053 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │ 12 Dec 25 01:33 UTC │
	│ start   │ -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-256959 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │                     │
	│ stop    │ -p newest-cni-256959 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │ 12 Dec 25 01:35 UTC │
	│ addons  │ enable dashboard -p newest-cni-256959 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │ 12 Dec 25 01:35 UTC │
	│ start   │ -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 01:35:11
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:35:11.336080  291455 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:35:11.336277  291455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:35:11.336290  291455 out.go:374] Setting ErrFile to fd 2...
	I1212 01:35:11.336296  291455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:35:11.336566  291455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:35:11.336950  291455 out.go:368] Setting JSON to false
	I1212 01:35:11.337843  291455 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8258,"bootTime":1765495054,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 01:35:11.337913  291455 start.go:143] virtualization:  
	I1212 01:35:11.341103  291455 out.go:179] * [newest-cni-256959] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:35:11.345273  291455 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:35:11.345376  291455 notify.go:221] Checking for updates...
	I1212 01:35:11.351231  291455 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:35:11.354134  291455 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:35:11.357086  291455 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 01:35:11.359981  291455 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:35:11.363090  291455 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:35:11.366381  291455 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:35:11.367076  291455 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:35:11.397719  291455 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:35:11.397845  291455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:35:11.450218  291455 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:35:11.441400779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:35:11.450324  291455 docker.go:319] overlay module found
	I1212 01:35:11.453495  291455 out.go:179] * Using the docker driver based on existing profile
	I1212 01:35:11.456257  291455 start.go:309] selected driver: docker
	I1212 01:35:11.456272  291455 start.go:927] validating driver "docker" against &{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:11.456385  291455 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:35:11.457105  291455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:35:11.512167  291455 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:35:11.503270098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:35:11.512501  291455 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 01:35:11.512533  291455 cni.go:84] Creating CNI manager for ""
	I1212 01:35:11.512581  291455 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:35:11.512620  291455 start.go:353] cluster config:
	{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:11.517595  291455 out.go:179] * Starting "newest-cni-256959" primary control-plane node in "newest-cni-256959" cluster
	I1212 01:35:11.520355  291455 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 01:35:11.523510  291455 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:35:11.526310  291455 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:35:11.526350  291455 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 01:35:11.526380  291455 cache.go:65] Caching tarball of preloaded images
	I1212 01:35:11.526401  291455 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:35:11.526463  291455 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 01:35:11.526474  291455 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 01:35:11.526577  291455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:35:11.545949  291455 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:35:11.545972  291455 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:35:11.545990  291455 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:35:11.546021  291455 start.go:360] acquireMachinesLock for newest-cni-256959: {Name:mke4c35c218ad59b1da2c46074b57e71134fc7be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:35:11.546106  291455 start.go:364] duration metric: took 61.449µs to acquireMachinesLock for "newest-cni-256959"
	I1212 01:35:11.546128  291455 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:35:11.546140  291455 fix.go:54] fixHost starting: 
	I1212 01:35:11.546394  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:11.562986  291455 fix.go:112] recreateIfNeeded on newest-cni-256959: state=Stopped err=<nil>
	W1212 01:35:11.563044  291455 fix.go:138] unexpected machine state, will restart: <nil>
	W1212 01:35:12.535792  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:12.641222  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:12.704850  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:35:12.704951  287206 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 01:35:12.708213  287206 out.go:179] * Enabled addons: 
	I1212 01:35:12.711265  287206 addons.go:530] duration metric: took 1m55.054971797s for enable addons: enabled=[]
	W1212 01:35:14.536558  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:11.566225  291455 out.go:252] * Restarting existing docker container for "newest-cni-256959" ...
	I1212 01:35:11.566307  291455 cli_runner.go:164] Run: docker start newest-cni-256959
	I1212 01:35:11.824711  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:11.850549  291455 kic.go:430] container "newest-cni-256959" state is running.
	I1212 01:35:11.850948  291455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:35:11.874496  291455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:35:11.875491  291455 machine.go:94] provisionDockerMachine start ...
	I1212 01:35:11.875566  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:11.904543  291455 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:11.904867  291455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 01:35:11.904894  291455 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:35:11.905649  291455 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 01:35:15.062841  291455 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:35:15.062884  291455 ubuntu.go:182] provisioning hostname "newest-cni-256959"
	I1212 01:35:15.062966  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.081374  291455 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:15.081715  291455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 01:35:15.081732  291455 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-256959 && echo "newest-cni-256959" | sudo tee /etc/hostname
	I1212 01:35:15.244594  291455 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:35:15.244717  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.262885  291455 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:15.263226  291455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 01:35:15.263249  291455 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-256959' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-256959/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-256959' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:35:15.415381  291455 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:35:15.415407  291455 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 01:35:15.415450  291455 ubuntu.go:190] setting up certificates
	I1212 01:35:15.415469  291455 provision.go:84] configureAuth start
	I1212 01:35:15.415542  291455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:35:15.432184  291455 provision.go:143] copyHostCerts
	I1212 01:35:15.432260  291455 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 01:35:15.432274  291455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 01:35:15.432771  291455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 01:35:15.432891  291455 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 01:35:15.432905  291455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 01:35:15.432935  291455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 01:35:15.433008  291455 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 01:35:15.433018  291455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 01:35:15.433044  291455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 01:35:15.433100  291455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.newest-cni-256959 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-256959]
	I1212 01:35:15.664957  291455 provision.go:177] copyRemoteCerts
	I1212 01:35:15.665025  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:35:15.665084  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.682010  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:15.786690  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:35:15.804464  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:35:15.821597  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 01:35:15.838753  291455 provision.go:87] duration metric: took 423.263374ms to configureAuth
	I1212 01:35:15.838782  291455 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:35:15.839040  291455 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:35:15.839053  291455 machine.go:97] duration metric: took 3.963544394s to provisionDockerMachine
	I1212 01:35:15.839061  291455 start.go:293] postStartSetup for "newest-cni-256959" (driver="docker")
	I1212 01:35:15.839072  291455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:35:15.839119  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:35:15.839169  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.855712  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:15.959303  291455 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:35:15.962341  291455 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:35:15.962368  291455 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:35:15.962380  291455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 01:35:15.962429  291455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 01:35:15.962509  291455 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 01:35:15.962609  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:35:15.969472  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:35:15.986194  291455 start.go:296] duration metric: took 147.119175ms for postStartSetup
	I1212 01:35:15.986304  291455 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:35:15.986375  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:16.005019  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:16.107859  291455 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:35:16.112663  291455 fix.go:56] duration metric: took 4.566516262s for fixHost
	I1212 01:35:16.112691  291455 start.go:83] releasing machines lock for "newest-cni-256959", held for 4.566573288s
	I1212 01:35:16.112760  291455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:35:16.129477  291455 ssh_runner.go:195] Run: cat /version.json
	I1212 01:35:16.129531  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:16.129775  291455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:35:16.129824  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:16.153158  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:16.155921  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:16.367474  291455 ssh_runner.go:195] Run: systemctl --version
	I1212 01:35:16.373832  291455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:35:16.378022  291455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:35:16.378104  291455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:35:16.385747  291455 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 01:35:16.385772  291455 start.go:496] detecting cgroup driver to use...
	I1212 01:35:16.385819  291455 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:35:16.385882  291455 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 01:35:16.403657  291455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 01:35:16.417469  291455 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:35:16.417564  291455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:35:16.433612  291455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:35:16.446861  291455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:35:16.554018  291455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:35:16.672193  291455 docker.go:234] disabling docker service ...
	I1212 01:35:16.672283  291455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:35:16.687238  291455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:35:16.700659  291455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:35:16.812563  291455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:35:16.928270  291455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:35:16.941185  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:35:16.957067  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 01:35:16.966276  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 01:35:16.975221  291455 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 01:35:16.975292  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 01:35:16.984294  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:35:16.993328  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 01:35:17.004796  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:35:17.015289  291455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:35:17.023922  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 01:35:17.036658  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 01:35:17.046732  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 01:35:17.056354  291455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:35:17.064063  291455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:35:17.071833  291455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:17.188012  291455 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 01:35:17.306110  291455 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 01:35:17.306231  291455 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 01:35:17.309882  291455 start.go:564] Will wait 60s for crictl version
	I1212 01:35:17.309968  291455 ssh_runner.go:195] Run: which crictl
	I1212 01:35:17.313475  291455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:35:17.340045  291455 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 01:35:17.340140  291455 ssh_runner.go:195] Run: containerd --version
	I1212 01:35:17.360301  291455 ssh_runner.go:195] Run: containerd --version
	I1212 01:35:17.385714  291455 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 01:35:17.388490  291455 cli_runner.go:164] Run: docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:35:17.404979  291455 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 01:35:17.409350  291455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:35:17.422610  291455 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 01:35:17.425426  291455 kubeadm.go:884] updating cluster {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:35:17.425578  291455 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:35:17.425675  291455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:35:17.450191  291455 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:35:17.450217  291455 containerd.go:534] Images already preloaded, skipping extraction
	I1212 01:35:17.450277  291455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:35:17.474185  291455 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:35:17.474220  291455 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:35:17.474228  291455 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1212 01:35:17.474373  291455 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-256959 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:35:17.474472  291455 ssh_runner.go:195] Run: sudo crictl info
	I1212 01:35:17.498662  291455 cni.go:84] Creating CNI manager for ""
	I1212 01:35:17.498685  291455 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:35:17.498869  291455 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 01:35:17.498905  291455 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-256959 NodeName:newest-cni-256959 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:35:17.499182  291455 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-256959"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:35:17.499276  291455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 01:35:17.511920  291455 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:35:17.512017  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:35:17.519602  291455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:35:17.532107  291455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 01:35:17.545262  291455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1212 01:35:17.557618  291455 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:35:17.561053  291455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:35:17.570894  291455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:17.675958  291455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:35:17.692695  291455 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959 for IP: 192.168.76.2
	I1212 01:35:17.692715  291455 certs.go:195] generating shared ca certs ...
	I1212 01:35:17.692750  291455 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:17.692911  291455 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 01:35:17.692980  291455 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 01:35:17.692995  291455 certs.go:257] generating profile certs ...
	I1212 01:35:17.693112  291455 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key
	I1212 01:35:17.693202  291455 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93
	I1212 01:35:17.693309  291455 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key
	I1212 01:35:17.693447  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 01:35:17.693518  291455 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 01:35:17.693536  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:35:17.693582  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:35:17.693632  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:35:17.693666  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 01:35:17.693747  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:35:17.694397  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:35:17.712974  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:35:17.738035  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:35:17.758905  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:35:17.776423  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:35:17.805243  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:35:17.826665  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:35:17.847012  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:35:17.868946  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:35:17.887272  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 01:35:17.904023  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 01:35:17.920802  291455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:35:17.933645  291455 ssh_runner.go:195] Run: openssl version
	I1212 01:35:17.939797  291455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.946909  291455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:35:17.954537  291455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.958217  291455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.958301  291455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.998878  291455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:35:18.008093  291455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.016725  291455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 01:35:18.025237  291455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.029387  291455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.029458  291455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.072423  291455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:35:18.080329  291455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.088043  291455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 01:35:18.095703  291455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.100065  291455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.100135  291455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.141016  291455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
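
The openssl x509 -hash calls above compute OpenSSL's subject-name hash, and the sudo test -L lines then verify that /etc/ssl/certs contains a <hash>.0 symlink for each installed CA (b5213941.0 for minikubeCA.pem, 51391683.0 and 3ec20f2e.0 for the two .pem files); that naming convention is how OpenSSL locates trust anchors by directory lookup. A sketch of installing one such link (assumes openssl on PATH and write access to the trust dir, hence the sudo in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCAByHash installs certPath into trustDir under OpenSSL's
    // <subject-hash>.0 naming so the system trust store can find it.
    func linkCAByHash(certPath, trustDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join(trustDir, hash+".0")
    	os.Remove(link) // clear any stale link first (error ignored)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
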
	I1212 01:35:18.148423  291455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:35:18.152541  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:35:18.195372  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:35:18.236073  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:35:18.276924  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:35:18.317697  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:35:18.358213  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
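
openssl x509 -checkend 86400 exits non-zero when the certificate expires within the next 86400 seconds (24 h); a zero exit on all six control-plane certs above is what lets the restart path reuse them unchanged. The same predicate as a compact Go sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"os"
    	"time"
    )

    // Exit 1 if the cert in os.Args[1] is unreadable or expires within
    // 24h, mirroring `openssl x509 -noout -checkend 86400 -in <file>`.
    func main() {
    	data, _ := os.ReadFile(os.Args[1])
    	block, _ := pem.Decode(data)
    	if block == nil {
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil || time.Now().Add(24*time.Hour).After(cert.NotAfter) {
    		os.Exit(1)
    	}
    }
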
	I1212 01:35:18.400083  291455 kubeadm.go:401] StartCluster: {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:18.400177  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 01:35:18.400236  291455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:35:18.437669  291455 cri.go:89] found id: ""
	I1212 01:35:18.437744  291455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:35:18.446134  291455 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 01:35:18.446156  291455 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 01:35:18.446208  291455 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:35:18.453928  291455 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:35:18.454522  291455 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-256959" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:35:18.454766  291455 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-2343/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-256959" cluster setting kubeconfig missing "newest-cni-256959" context setting]
	I1212 01:35:18.455226  291455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
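
The kubeconfig.go lines show the shared kubeconfig missing both the cluster and the context entry for this profile, so minikube repairs the file under the WriteFile lock acquired above. A hedged sketch of that repair using client-go's clientcmd package (addProfile is illustrative; minikube's own writer also preserves unrelated profiles and credentials):

    package main

    import (
    	"k8s.io/client-go/tools/clientcmd"
    	api "k8s.io/client-go/tools/clientcmd/api"
    )

    // addProfile inserts cluster, user, and context entries for name
    // into the kubeconfig at path. Locking is omitted here; the log
    // shows minikube taking a file lock before writing.
    func addProfile(path, name, server string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	cluster := api.NewCluster()
    	cluster.Server = server // e.g. https://192.168.76.2:8443
    	cfg.Clusters[name] = cluster
    	cfg.AuthInfos[name] = api.NewAuthInfo()
    	ctx := api.NewContext()
    	ctx.Cluster = name
    	ctx.AuthInfo = name
    	cfg.Contexts[name] = ctx
    	return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
    	_ = addProfile("/home/jenkins/minikube-integration/22101-2343/kubeconfig",
    		"newest-cni-256959", "https://192.168.76.2:8443")
    }
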
	I1212 01:35:18.456674  291455 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:35:18.464597  291455 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1212 01:35:18.464630  291455 kubeadm.go:602] duration metric: took 18.46826ms to restartPrimaryControlPlane
	I1212 01:35:18.464640  291455 kubeadm.go:403] duration metric: took 64.568702ms to StartCluster
	I1212 01:35:18.464656  291455 settings.go:142] acquiring lock: {Name:mk6dd4250df69aeba4752e9f33aeef37272375c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:18.464716  291455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:35:18.465619  291455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:18.465827  291455 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 01:35:18.466211  291455 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:35:18.466236  291455 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
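
The toEnable map above drives one goroutine per enabled addon, which is why the "Setting addon" lines that follow interleave. A sketch of that fan-out using errgroup (enableAddon is a hypothetical stand-in for the per-addon manifest scp + kubectl apply seen below):

    package main

    import (
    	"fmt"

    	"golang.org/x/sync/errgroup"
    )

    // enableAddon stands in for minikube's per-addon setup work.
    func enableAddon(profile, name string) error {
    	fmt.Printf("Setting addon %s=true in %q\n", name, profile)
    	return nil
    }

    // enableAll launches one goroutine per enabled addon and returns
    // the first error after all of them finish.
    func enableAll(profile string, toEnable map[string]bool) error {
    	var g errgroup.Group
    	for name, on := range toEnable {
    		if !on {
    			continue
    		}
    		name := name // capture per iteration (pre-Go 1.22 loop-var semantics)
    		g.Go(func() error { return enableAddon(profile, name) })
    	}
    	return g.Wait()
    }

    func main() {
    	_ = enableAll("newest-cni-256959", map[string]bool{
    		"storage-provisioner": true, "dashboard": true, "default-storageclass": true,
    	})
    }
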
	I1212 01:35:18.466355  291455 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-256959"
	I1212 01:35:18.466367  291455 addons.go:70] Setting dashboard=true in profile "newest-cni-256959"
	I1212 01:35:18.466371  291455 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-256959"
	I1212 01:35:18.466378  291455 addons.go:239] Setting addon dashboard=true in "newest-cni-256959"
	W1212 01:35:18.466385  291455 addons.go:248] addon dashboard should already be in state true
	I1212 01:35:18.466396  291455 host.go:66] Checking if "newest-cni-256959" exists ...
	I1212 01:35:18.466403  291455 host.go:66] Checking if "newest-cni-256959" exists ...
	I1212 01:35:18.466836  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:18.466869  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:18.467337  291455 addons.go:70] Setting default-storageclass=true in profile "newest-cni-256959"
	I1212 01:35:18.467363  291455 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-256959"
	I1212 01:35:18.467641  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
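
The repeated docker container inspect --format={{.State.Status}} calls gate each addon goroutine on the node container actually being "running". The same probe from Go, shelling out the way cli_runner.go does:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerStatus returns the Docker state string for name,
    // e.g. "running", "paused", or "exited".
    func containerStatus(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect",
    		name, "--format={{.State.Status}}").Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	s, err := containerStatus("newest-cni-256959")
    	fmt.Println(s, err)
    }
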
	I1212 01:35:18.469758  291455 out.go:179] * Verifying Kubernetes components...
	I1212 01:35:18.473053  291455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:18.505578  291455 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:35:18.507992  291455 addons.go:239] Setting addon default-storageclass=true in "newest-cni-256959"
	I1212 01:35:18.508032  291455 host.go:66] Checking if "newest-cni-256959" exists ...
	I1212 01:35:18.508443  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:18.515343  291455 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:35:18.515364  291455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:35:18.515428  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:18.518345  291455 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 01:35:18.523100  291455 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1212 01:35:17.036393  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:19.036650  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:18.525972  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 01:35:18.526002  291455 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 01:35:18.526079  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:18.564602  291455 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:18.564630  291455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:35:18.564700  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:18.565404  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:18.592490  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:18.614974  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:18.707284  291455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:35:18.738514  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:35:18.783779  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 01:35:18.783804  291455 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 01:35:18.797813  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:18.817201  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 01:35:18.817275  291455 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 01:35:18.834247  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 01:35:18.834268  291455 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 01:35:18.850261  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 01:35:18.850281  291455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 01:35:18.864878  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 01:35:18.864902  291455 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 01:35:18.879989  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 01:35:18.880012  291455 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 01:35:18.893252  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 01:35:18.893275  291455 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 01:35:18.906457  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 01:35:18.906522  291455 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 01:35:18.919410  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:18.919484  291455 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 01:35:18.931957  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:19.295481  291455 api_server.go:52] waiting for apiserver process to appear ...
	W1212 01:35:19.295638  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.295690  291455 retry.go:31] will retry after 249.842732ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
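
Each failed apply is retried after a slightly randomized delay (249.842732ms, 351.420897ms, 281.426587ms, ...) rather than a fixed interval, so parallel profiles do not retry against the recovering apiserver in lockstep. A sketch of that jittered-retry pattern (retryWithJitter is illustrative; the log's retry.go is minikube's own implementation):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithJitter runs f up to attempts times, sleeping a randomized
    // delay in [base, 2*base) between tries, and returns the last error.
    func retryWithJitter(attempts int, base time.Duration, f func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = f(); err == nil {
    			return nil
    		}
    		if i == attempts-1 {
    			break
    		}
    		d := base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	_ = retryWithJitter(5, 250*time.Millisecond, func() error {
    		return fmt.Errorf("connect: connection refused") // e.g. apiserver still starting
    	})
    }
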
	W1212 01:35:19.295768  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.295783  291455 retry.go:31] will retry after 351.420897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:35:19.296118  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.296142  291455 retry.go:31] will retry after 281.426587ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.296213  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
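
While the applies keep failing with "connection refused" on localhost:8443, the run polls sudo pgrep -xnf kube-apiserver.*minikube.* until an apiserver process exists; validation can only succeed once it is up and serving. A sketch of that wait loop (timeout and poll interval are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep until a kube-apiserver process exists
    // or the deadline passes; pgrep exits non-zero when nothing matches.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
    }

    func main() {
    	fmt.Println(waitForAPIServer(2 * time.Minute))
    }
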
	I1212 01:35:19.546048  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:35:19.578494  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:19.622946  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.623064  291455 retry.go:31] will retry after 277.166543ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.648375  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:19.656309  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.656406  291455 retry.go:31] will retry after 462.607475ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:35:19.715463  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.715506  291455 retry.go:31] will retry after 556.232924ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.796674  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:19.900383  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:19.963236  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.963266  291455 retry.go:31] will retry after 505.253944ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.119589  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:20.186519  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.186613  291455 retry.go:31] will retry after 424.835438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.272893  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:20.296648  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:20.336051  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.336183  291455 retry.go:31] will retry after 483.909657ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.469348  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:20.528062  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.528096  291455 retry.go:31] will retry after 804.643976ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.612336  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:20.682501  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.682548  291455 retry.go:31] will retry after 558.97301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.795783  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:20.820454  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:20.905698  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.905732  291455 retry.go:31] will retry after 695.755311ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.242222  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:21.295663  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:21.312788  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.312824  291455 retry.go:31] will retry after 1.866088371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.333223  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:21.536481  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:23.536603  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:21.395495  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.395527  291455 retry.go:31] will retry after 1.442265452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.601699  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:21.661918  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.661958  291455 retry.go:31] will retry after 965.923553ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.796193  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:22.296596  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:22.628164  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:22.689983  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:22.690024  291455 retry.go:31] will retry after 2.419076287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:22.796215  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:22.838490  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:22.896567  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:22.896595  291455 retry.go:31] will retry after 1.026441386s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:23.180088  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:23.242606  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:23.242641  291455 retry.go:31] will retry after 1.447175367s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:23.295985  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:23.795677  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:23.924269  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:23.999262  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:23.999301  291455 retry.go:31] will retry after 3.676300513s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:24.295744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:24.690891  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:24.751142  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:24.751178  291455 retry.go:31] will retry after 2.523379824s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:24.796474  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:25.109290  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:25.170081  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:25.170117  291455 retry.go:31] will retry after 1.61445699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:25.296317  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:25.796411  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:26.295885  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:26.036848  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:28.536033  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
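Meanwhile process 287206 (the TestStartStop no-preload cluster) is polling its node's Ready condition against 192.168.85.2:8443 and hitting the same connection-refused failure, so both apiservers are unreachable at this point. The loop in node_ready.go amounts to something like the following client-go sketch (a paraphrase, not minikube's code; the kubeconfig path is illustrative):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node's conditions until Ready is True or the
	// timeout elapses, retrying on transport errors.
	func waitNodeReady(kubeconfig, name string, timeout time.Duration) error {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return err
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				// "connect: connection refused" lands here, as in the warnings above
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				time.Sleep(2 * time.Second)
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
	}

	func main() {
		if err := waitNodeReady("/var/lib/minikube/kubeconfig", "no-preload-361053", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}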
	I1212 01:35:26.784844  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:26.796101  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:26.910864  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:26.910893  291455 retry.go:31] will retry after 5.25056634s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.275356  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:27.295815  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:27.348749  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.348785  291455 retry.go:31] will retry after 4.97523733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.676221  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:27.738144  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.738177  291455 retry.go:31] will retry after 5.096436926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.796329  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:28.296194  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:28.795721  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:29.296646  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:29.795689  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:30.295694  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:30.796607  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:31.296202  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
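Interleaved with the applies, process 291455 polls roughly every 500ms for a running apiserver via sudo pgrep -xnf kube-apiserver.*minikube.*; pgrep exits 0 only when a matching process exists, so the check reduces to an exit-code test. A minimal equivalent of that poll:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverProcessRunning mirrors the pgrep poll in the log: -x requires
	// the pattern to match the whole line, -n picks the newest match, and -f
	// matches against the full command line.
	func apiserverProcessRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		for !apiserverProcessRunning() {
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		fmt.Println("kube-apiserver is running")
	}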
	W1212 01:35:30.536109  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:32.536508  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:35.036562  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:31.795914  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:32.161653  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:32.223763  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.223796  291455 retry.go:31] will retry after 3.268815276s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.296204  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:32.325119  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:32.386121  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.386153  291455 retry.go:31] will retry after 5.854435808s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.796226  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:32.834968  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:32.909984  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.910017  291455 retry.go:31] will retry after 7.163447884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:33.296541  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:33.796667  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:34.295628  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:34.796652  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:35.295756  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
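In parallel with the applies, the runner polls roughly every 500ms for an apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*`: -f matches against the full command line, -x requires an exact match of that line, and -n returns only the newest match. A small sketch of that health poll, under the assumption that a zero exit status means at least one process matched:

    // Poll every 500ms until a kube-apiserver process exists. pgrep exits 0
    // when at least one process matches the pattern.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        for {
            if err := exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Run(); err == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }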
	I1212 01:35:35.493366  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:35.556021  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:35.556054  291455 retry.go:31] will retry after 12.955659755s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:35.796356  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:36.296236  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:37.036788  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:39.536591  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
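The interleaved 287206 lines come from a different test running in parallel (the no-preload profile of TestStartStop), whose node_ready.go check GETs the Node object at https://192.168.85.2:8443 and inspects its "Ready" condition, hitting the same connection-refused symptom. A hedged client-go sketch of such a check follows; this is not minikube's actual code, and the kubeconfig path is an assumption.

    // Fetch the Node and inspect its Ready condition; the failure mode in the
    // log is that the GET itself is refused before any condition can be read.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(),
            "no-preload-361053", metav1.GetOptions{})
        if err != nil {
            fmt.Println("error getting node (will retry):", err)
            return
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Println("Ready condition:", c.Status)
            }
        }
    }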
	I1212 01:35:36.796391  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:37.295746  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:37.795722  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:38.241525  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:38.295983  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:38.315189  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:38.315224  291455 retry.go:31] will retry after 8.402358708s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:38.795800  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:39.296313  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:39.795769  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:40.074570  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:40.142371  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:40.142407  291455 retry.go:31] will retry after 11.797804339s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:40.295684  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:40.795715  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:41.295800  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:42.035934  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:44.036480  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:41.796201  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:42.295677  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:42.795870  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:43.296206  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:43.795818  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:44.295727  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:44.795706  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:45.296501  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:45.795731  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:46.296084  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:46.536110  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:48.536515  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:46.717860  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:46.778291  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:46.778324  291455 retry.go:31] will retry after 11.640937008s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:46.796419  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:47.296365  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:47.796242  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:48.295728  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:48.512617  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:48.620306  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:48.620334  291455 retry.go:31] will retry after 20.936993287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:48.795684  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:49.296228  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:49.796588  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:50.296351  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:50.796261  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:51.296609  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:50.536753  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:53.036546  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:51.796731  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:51.941351  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:52.001637  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:52.001682  291455 retry.go:31] will retry after 15.364088557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:52.296092  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:52.795636  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:53.296512  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:53.811922  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:54.295780  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:54.795777  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:55.296163  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:55.796273  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:56.295752  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:55.535981  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:57.536499  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:59.536582  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:56.795693  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:57.295887  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:57.796459  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:58.296209  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:58.419661  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:58.488403  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:58.488438  291455 retry.go:31] will retry after 29.791340434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:58.796698  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:59.295744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:59.796477  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:00.295794  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:00.795759  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:01.296237  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:36:02.036574  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:04.036717  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:01.796304  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:02.296424  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:02.795750  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:03.296298  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:03.796668  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:04.296158  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:04.796345  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:05.296665  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:05.796526  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:06.295717  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:36:06.536543  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:09.036693  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:06.795806  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:07.296383  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:07.366524  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:36:07.433303  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:07.433335  291455 retry.go:31] will retry after 21.959421138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:07.795756  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:08.296562  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:08.795685  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:09.295744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:09.558068  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:36:09.643748  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:09.643785  291455 retry.go:31] will retry after 31.140330108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:09.796018  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:10.295683  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:10.795744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:11.295780  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:36:11.536613  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:13.536774  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:11.795645  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:12.295717  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:12.795762  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:13.296234  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:13.795775  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:14.296543  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:14.796297  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:15.295763  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:15.795884  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:16.296551  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:36:16.036849  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:18.536512  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:16.796640  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:17.295760  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:17.796208  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:18.296641  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:18.795858  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:18.795946  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:18.819559  291455 cri.go:89] found id: ""
	I1212 01:36:18.819585  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.819594  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:18.819605  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:18.819671  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:18.843419  291455 cri.go:89] found id: ""
	I1212 01:36:18.843444  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.843453  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:18.843459  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:18.843524  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:18.867870  291455 cri.go:89] found id: ""
	I1212 01:36:18.867894  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.867903  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:18.867910  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:18.867975  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:18.892504  291455 cri.go:89] found id: ""
	I1212 01:36:18.892528  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.892536  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:18.892543  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:18.892614  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:18.916462  291455 cri.go:89] found id: ""
	I1212 01:36:18.916484  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.916493  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:18.916499  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:18.916555  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:18.940793  291455 cri.go:89] found id: ""
	I1212 01:36:18.940818  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.940827  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:18.940833  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:18.940892  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:18.965485  291455 cri.go:89] found id: ""
	I1212 01:36:18.965513  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.965521  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:18.965527  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:18.965585  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:18.990141  291455 cri.go:89] found id: ""
	I1212 01:36:18.990170  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.990179  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
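Having waited out the process check, the runner enumerates the expected control-plane containers through the CRI: `sudo crictl ps -a --quiet --name=NAME` prints only container IDs, one per line, so empty output is exactly the `found id: ""` / `0 containers` case reported for every component above. A compact sketch of that enumeration:

    // For each expected component, list container IDs via crictl; an empty
    // result means the container was never created.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }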
	I1212 01:36:18.990189  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:18.990202  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:19.044826  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:19.044860  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:19.058338  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:19.058373  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:19.121541  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:19.113010    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.113711    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.115490    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.116077    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.117640    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:19.113010    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.113711    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.115490    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.116077    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.117640    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:19.121602  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:19.121622  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:19.146904  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:19.146941  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
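
The block above is one full iteration of minikube's apiserver wait loop: pgrep finds no kube-apiserver process, each crictl query for a control-plane container returns no IDs, and logs.go falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status output before polling again roughly every three seconds. Below is a minimal Go sketch of that poll-then-gather pattern; the runSSH helper, the fixed 3-second delay, and the function names are assumptions for illustration, not minikube's actual code.

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // runSSH stands in for the ssh_runner.Run calls in the log above: it
    // executes a command on the node and returns its stdout. Hypothetical helper.
    type runSSH func(cmd string) (string, error)

    // apiserverUp mirrors the two probes the log shows: a live-process check,
    // then a CRI query for a kube-apiserver container.
    func apiserverUp(run runSSH) bool {
    	if _, err := run(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
    		return true
    	}
    	out, err := run("sudo crictl ps -a --quiet --name=kube-apiserver")
    	return err == nil && out != ""
    }

    func waitForAPIServer(run runSSH, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if apiserverUp(run) {
    			return nil
    		}
    		// Nothing found: gather diagnostics, as in the "Gathering logs
    		// for ..." lines above; failures here are non-fatal.
    		for _, cmd := range []string{
    			"sudo journalctl -u kubelet -n 400",
    			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    			"sudo journalctl -u containerd -n 400",
    		} {
    			run(cmd)
    		}
    		time.Sleep(3 * time.Second) // the log shows roughly 3s between polls
    	}
    	return errors.New("kube-apiserver did not come up before the deadline")
    }

    func main() {
    	// Stub runner that always fails, reproducing the empty results above.
    	stub := func(cmd string) (string, error) { return "", fmt.Errorf("no output for %q", cmd) }
    	fmt.Println(waitForAPIServer(stub, 10*time.Second))
    }

With the apiserver down, every iteration produces exactly the repeating cycle seen throughout this log: empty crictl results, then the same set of log gathers.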
	W1212 01:36:21.036609  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:23.536552  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:21.678937  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:21.689641  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:21.689710  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:21.722833  291455 cri.go:89] found id: ""
	I1212 01:36:21.722854  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.722862  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:21.722869  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:21.722926  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:21.747286  291455 cri.go:89] found id: ""
	I1212 01:36:21.747323  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.747339  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:21.747346  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:21.747417  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:21.771941  291455 cri.go:89] found id: ""
	I1212 01:36:21.771965  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.771980  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:21.771987  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:21.772052  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:21.801075  291455 cri.go:89] found id: ""
	I1212 01:36:21.801104  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.801113  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:21.801119  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:21.801176  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:21.825561  291455 cri.go:89] found id: ""
	I1212 01:36:21.825587  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.825595  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:21.825601  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:21.825659  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:21.854532  291455 cri.go:89] found id: ""
	I1212 01:36:21.854559  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.854569  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:21.854580  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:21.854640  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:21.879725  291455 cri.go:89] found id: ""
	I1212 01:36:21.879789  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.879814  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:21.879828  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:21.879912  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:21.904405  291455 cri.go:89] found id: ""
	I1212 01:36:21.904428  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.904437  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:21.904446  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:21.904487  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:21.970611  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:21.962223    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.962657    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964375    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964860    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.966282    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:21.962223    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.962657    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964375    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964860    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.966282    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:21.970642  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:21.970659  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:21.995425  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:21.995463  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:22.024736  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:22.024767  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:22.082740  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:22.082785  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:24.597828  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:24.608497  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:24.608573  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:24.633951  291455 cri.go:89] found id: ""
	I1212 01:36:24.633978  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.633986  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:24.633992  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:24.634048  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:24.658904  291455 cri.go:89] found id: ""
	I1212 01:36:24.658929  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.658937  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:24.658944  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:24.659026  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:24.683684  291455 cri.go:89] found id: ""
	I1212 01:36:24.683709  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.683718  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:24.683724  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:24.683791  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:24.708745  291455 cri.go:89] found id: ""
	I1212 01:36:24.708770  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.708779  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:24.708786  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:24.708842  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:24.733454  291455 cri.go:89] found id: ""
	I1212 01:36:24.733479  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.733488  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:24.733494  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:24.733551  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:24.761862  291455 cri.go:89] found id: ""
	I1212 01:36:24.761889  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.761898  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:24.761904  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:24.761961  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:24.785388  291455 cri.go:89] found id: ""
	I1212 01:36:24.785415  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.785424  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:24.785430  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:24.785486  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:24.810681  291455 cri.go:89] found id: ""
	I1212 01:36:24.810707  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.810717  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:24.810727  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:24.810743  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:24.865711  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:24.865752  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:24.880399  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:24.880431  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:24.943187  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:24.935391    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.936083    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937614    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937904    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.939457    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:24.935391    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.936083    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937614    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937904    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.939457    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:24.943253  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:24.943274  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:24.967790  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:24.967820  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:36:26.036483  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:28.036687  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:30.036781  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
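
The W...287206 lines above (and the pair at 01:36:21-23 earlier) are interleaved from the parallel TestStartStop no-preload test, whose process is polling the node's Ready condition against 192.168.85.2:8443 and getting connection refused while that cluster's apiserver is also down. A sketch of such a Ready poll using client-go follows; the function name and clientset wiring are illustrative assumptions, not the test's actual node_ready.go code.

    package nodecheck

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nodeReady reports whether the named node has a Ready=True condition.
    // While the apiserver is unreachable, Get returns a transport error like
    // "dial tcp 192.168.85.2:8443: connect: connection refused", which is
    // what the warnings above log before retrying.
    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }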
	I1212 01:36:27.495634  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:27.506605  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:27.506700  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:27.548836  291455 cri.go:89] found id: ""
	I1212 01:36:27.548864  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.548873  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:27.548879  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:27.548953  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:27.600295  291455 cri.go:89] found id: ""
	I1212 01:36:27.600324  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.600334  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:27.600340  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:27.600397  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:27.625951  291455 cri.go:89] found id: ""
	I1212 01:36:27.625979  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.625987  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:27.625993  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:27.626062  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:27.651635  291455 cri.go:89] found id: ""
	I1212 01:36:27.651660  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.651668  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:27.651675  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:27.651734  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:27.676415  291455 cri.go:89] found id: ""
	I1212 01:36:27.676437  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.676446  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:27.676473  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:27.676535  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:27.699845  291455 cri.go:89] found id: ""
	I1212 01:36:27.699868  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.699876  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:27.699883  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:27.699938  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:27.735327  291455 cri.go:89] found id: ""
	I1212 01:36:27.735353  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.735362  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:27.735368  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:27.735428  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:27.759909  291455 cri.go:89] found id: ""
	I1212 01:36:27.759932  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.759940  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:27.759950  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:27.759961  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:27.786638  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:27.786667  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:27.841026  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:27.841058  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:27.854475  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:27.854508  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:27.917832  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:27.909374    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.909866    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911432    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911952    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.913437    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:27.909374    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.909866    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911432    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911952    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.913437    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:27.917855  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:27.917867  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:28.286241  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:36:28.389245  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:28.389279  291455 retry.go:31] will retry after 46.053342505s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:29.393036  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:36:29.455460  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:29.455496  291455 retry.go:31] will retry after 47.570792587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:30.443136  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:30.453668  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:30.453743  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:30.480117  291455 cri.go:89] found id: ""
	I1212 01:36:30.480141  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.480149  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:30.480155  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:30.480214  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:30.505432  291455 cri.go:89] found id: ""
	I1212 01:36:30.505460  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.505470  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:30.505478  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:30.505543  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:30.530571  291455 cri.go:89] found id: ""
	I1212 01:36:30.530598  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.530608  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:30.530614  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:30.530675  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:30.587393  291455 cri.go:89] found id: ""
	I1212 01:36:30.587429  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.587439  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:30.587445  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:30.587517  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:30.631827  291455 cri.go:89] found id: ""
	I1212 01:36:30.631894  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.631917  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:30.631941  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:30.632019  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:30.655968  291455 cri.go:89] found id: ""
	I1212 01:36:30.656043  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.656065  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:30.656077  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:30.656143  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:30.680079  291455 cri.go:89] found id: ""
	I1212 01:36:30.680101  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.680110  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:30.680116  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:30.680175  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:30.704249  291455 cri.go:89] found id: ""
	I1212 01:36:30.704324  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.704346  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:30.704365  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:30.704391  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:30.760587  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:30.760620  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:30.774118  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:30.774145  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:30.838730  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:30.831029    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.831642    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.833120    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.833546    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.835035    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:30.831029    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.831642    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.833120    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.833546    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.835035    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:30.838753  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:30.838765  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:30.863650  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:30.863684  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:36:32.039431  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:34.536636  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:33.391024  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:33.401417  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:33.401486  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:33.425243  291455 cri.go:89] found id: ""
	I1212 01:36:33.425265  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.425274  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:33.425280  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:33.425337  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:33.451769  291455 cri.go:89] found id: ""
	I1212 01:36:33.451792  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.451800  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:33.451806  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:33.451869  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:33.476935  291455 cri.go:89] found id: ""
	I1212 01:36:33.476960  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.476968  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:33.476974  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:33.477035  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:33.502755  291455 cri.go:89] found id: ""
	I1212 01:36:33.502781  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.502796  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:33.502802  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:33.502859  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:33.528810  291455 cri.go:89] found id: ""
	I1212 01:36:33.528835  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.528844  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:33.528851  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:33.528915  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:33.559119  291455 cri.go:89] found id: ""
	I1212 01:36:33.559197  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.559219  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:33.559237  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:33.559321  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:33.624518  291455 cri.go:89] found id: ""
	I1212 01:36:33.624547  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.624556  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:33.624562  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:33.624620  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:33.657379  291455 cri.go:89] found id: ""
	I1212 01:36:33.657401  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.657409  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:33.657418  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:33.657428  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:33.713396  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:33.713430  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:33.727420  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:33.727450  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:33.796759  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:33.788822    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.789567    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.791169    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.791683    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.792822    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:33.788822    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.789567    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.791169    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.791683    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.792822    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:33.796782  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:33.796795  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:33.822210  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:33.822246  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:36:37.036646  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:39.036700  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:36.350581  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:36.361065  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:36.361139  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:36.384625  291455 cri.go:89] found id: ""
	I1212 01:36:36.384647  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.384655  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:36.384661  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:36.384721  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:36.409313  291455 cri.go:89] found id: ""
	I1212 01:36:36.409338  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.409347  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:36.409353  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:36.409414  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:36.437773  291455 cri.go:89] found id: ""
	I1212 01:36:36.437796  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.437804  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:36.437811  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:36.437875  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:36.462058  291455 cri.go:89] found id: ""
	I1212 01:36:36.462080  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.462089  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:36.462096  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:36.462158  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:36.485881  291455 cri.go:89] found id: ""
	I1212 01:36:36.485902  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.485911  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:36.485917  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:36.485973  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:36.510249  291455 cri.go:89] found id: ""
	I1212 01:36:36.510318  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.510340  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:36.510362  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:36.510444  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:36.546913  291455 cri.go:89] found id: ""
	I1212 01:36:36.546948  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.546957  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:36.546963  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:36.547067  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:36.604532  291455 cri.go:89] found id: ""
	I1212 01:36:36.604562  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.604571  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:36.604580  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:36.604593  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:36.684036  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:36.674581    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.675420    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.677203    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.677878    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.679666    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:36.674581    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.675420    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.677203    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.677878    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.679666    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:36.684061  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:36.684074  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:36.709835  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:36.709866  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:36.737742  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:36.737768  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:36.792829  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:36.792864  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
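The block above is one full iteration of a poll that repeats for the rest of this log: minikube lists CRI containers for each control-plane component over SSH and finds none, then gathers diagnostics. A minimal shell sketch of the same check, assuming crictl is present on the node and you have a shell on it (the profile behind process 291455 is not named in this excerpt):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids="$(sudo crictl ps -a --quiet --name="${name}")"
      if [ -z "${ids}" ]; then
        echo "no container matching \"${name}\""
      else
        echo "${name}: ${ids}"
      fi
    done

An empty result for every name, as seen here, means the kubelet never started any static pod, so the kubelet journal is the log worth reading, not the (nonexistent) container logs.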
	I1212 01:36:39.307416  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:39.317852  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:39.317952  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:39.342723  291455 cri.go:89] found id: ""
	I1212 01:36:39.342747  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.342756  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:39.342763  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:39.342821  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:39.367433  291455 cri.go:89] found id: ""
	I1212 01:36:39.367472  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.367485  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:39.367492  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:39.367559  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:39.392871  291455 cri.go:89] found id: ""
	I1212 01:36:39.392896  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.392904  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:39.392911  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:39.392974  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:39.417519  291455 cri.go:89] found id: ""
	I1212 01:36:39.417546  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.417555  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:39.417562  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:39.417621  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:39.441729  291455 cri.go:89] found id: ""
	I1212 01:36:39.441760  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.441769  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:39.441775  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:39.441841  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:39.466118  291455 cri.go:89] found id: ""
	I1212 01:36:39.466147  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.466156  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:39.466163  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:39.466225  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:39.491269  291455 cri.go:89] found id: ""
	I1212 01:36:39.491292  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.491304  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:39.491310  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:39.491375  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:39.515625  291455 cri.go:89] found id: ""
	I1212 01:36:39.515650  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.515659  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:39.515668  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:39.515679  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:39.595337  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:39.595376  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:39.617464  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:39.617500  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:39.698043  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:39.689431    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.689924    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.691689    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.692010    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.693641    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:39.689431    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.689924    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.691689    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.692010    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.693641    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:39.698068  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:39.698080  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:39.722656  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:39.722692  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:40.784380  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:36:40.845895  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:36:40.846018  291455 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
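The storageclass manifest itself is not at fault here: client-side validation fails only because kubectl cannot fetch the OpenAPI schema from an apiserver that is not listening, and the suggested --validate=false would not help, since the apply call needs that same apiserver anyway. The retry minikube schedules is the sensible response. A quick hedged check from inside the node for when the server comes back (the /livez endpoint is standard on recent Kubernetes):

    curl -sk https://localhost:8443/livez || echo "apiserver not listening yet"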
	W1212 01:36:41.536608  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:44.036506  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:42.256252  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:42.269504  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:42.269576  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:42.296285  291455 cri.go:89] found id: ""
	I1212 01:36:42.296314  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.296323  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:42.296330  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:42.296393  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:42.324314  291455 cri.go:89] found id: ""
	I1212 01:36:42.324349  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.324366  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:42.324373  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:42.324448  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:42.353000  291455 cri.go:89] found id: ""
	I1212 01:36:42.353024  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.353033  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:42.353039  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:42.353103  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:42.379029  291455 cri.go:89] found id: ""
	I1212 01:36:42.379057  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.379066  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:42.379073  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:42.379141  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:42.404039  291455 cri.go:89] found id: ""
	I1212 01:36:42.404068  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.404077  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:42.404084  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:42.404150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:42.429848  291455 cri.go:89] found id: ""
	I1212 01:36:42.429877  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.429887  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:42.429893  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:42.429952  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:42.454022  291455 cri.go:89] found id: ""
	I1212 01:36:42.454049  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.454058  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:42.454065  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:42.454126  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:42.481205  291455 cri.go:89] found id: ""
	I1212 01:36:42.481231  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.481240  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:42.481249  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:42.481260  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:42.511373  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:42.511400  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:42.594053  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:42.594092  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:42.613172  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:42.613201  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:42.688118  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:42.678899    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.679678    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.681197    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.681708    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.683477    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:42.678899    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.679678    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.681197    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.681708    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.683477    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:42.688142  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:42.688155  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
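Each failed poll triggers the same five-part log gather seen above. To reproduce one pass by hand on the node, the commands are exactly the ones this log runs, usable as-is:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a

Only the kubelet and containerd journals can say anything useful at this point: describe nodes needs the apiserver, and there are no containers for crictl to list.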
	I1212 01:36:45.213644  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:45.234582  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:45.234677  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:45.268686  291455 cri.go:89] found id: ""
	I1212 01:36:45.268715  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.268732  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:45.268741  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:45.268827  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:45.297061  291455 cri.go:89] found id: ""
	I1212 01:36:45.297115  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.297132  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:45.297139  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:45.297272  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:45.324030  291455 cri.go:89] found id: ""
	I1212 01:36:45.324063  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.324072  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:45.324078  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:45.324144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:45.354569  291455 cri.go:89] found id: ""
	I1212 01:36:45.354595  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.354612  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:45.354619  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:45.354697  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:45.380068  291455 cri.go:89] found id: ""
	I1212 01:36:45.380133  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.380160  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:45.380175  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:45.380249  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:45.403554  291455 cri.go:89] found id: ""
	I1212 01:36:45.403620  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.403643  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:45.403664  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:45.403746  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:45.426534  291455 cri.go:89] found id: ""
	I1212 01:36:45.426560  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.426568  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:45.426574  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:45.426637  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:45.455346  291455 cri.go:89] found id: ""
	I1212 01:36:45.455414  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.455438  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:45.455457  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:45.455469  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:45.510486  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:45.510521  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:45.523916  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:45.523944  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:45.642152  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:45.624680    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.625385    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.635164    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.635878    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.637755    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:45.624680    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.625385    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.635164    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.635878    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.637755    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:45.642173  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:45.642186  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:45.667625  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:45.667661  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:36:46.535816  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:48.537737  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:48.197188  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:48.208199  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:48.208272  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:48.236943  291455 cri.go:89] found id: ""
	I1212 01:36:48.236969  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.236977  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:48.236984  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:48.237048  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:48.262444  291455 cri.go:89] found id: ""
	I1212 01:36:48.262468  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.262477  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:48.262483  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:48.262545  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:48.292262  291455 cri.go:89] found id: ""
	I1212 01:36:48.292292  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.292301  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:48.292307  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:48.292370  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:48.318028  291455 cri.go:89] found id: ""
	I1212 01:36:48.318053  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.318063  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:48.318069  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:48.318128  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:48.343500  291455 cri.go:89] found id: ""
	I1212 01:36:48.343524  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.343532  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:48.343539  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:48.343620  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:48.374537  291455 cri.go:89] found id: ""
	I1212 01:36:48.374563  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.374572  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:48.374578  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:48.374657  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:48.399165  291455 cri.go:89] found id: ""
	I1212 01:36:48.399188  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.399197  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:48.399203  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:48.399265  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:48.424429  291455 cri.go:89] found id: ""
	I1212 01:36:48.424452  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.424460  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:48.424469  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:48.424482  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:48.450297  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:48.450336  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:48.477992  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:48.478017  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:48.533513  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:48.533546  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:48.554972  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:48.555078  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:48.639199  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:48.628523    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.629323    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.630881    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.631460    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.634979    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:48.628523    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.629323    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.630881    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.631460    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.634979    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:51.139443  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:51.152801  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:51.152869  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:51.181036  291455 cri.go:89] found id: ""
	I1212 01:36:51.181060  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.181069  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:51.181076  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:51.181139  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:51.205637  291455 cri.go:89] found id: ""
	I1212 01:36:51.205664  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.205673  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:51.205680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:51.205744  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:51.230375  291455 cri.go:89] found id: ""
	I1212 01:36:51.230401  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.230410  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:51.230416  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:51.230479  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:51.260594  291455 cri.go:89] found id: ""
	I1212 01:36:51.260620  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.260629  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:51.260636  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:51.260693  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:51.286513  291455 cri.go:89] found id: ""
	I1212 01:36:51.286538  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.286548  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:51.286554  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:51.286613  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:51.320488  291455 cri.go:89] found id: ""
	I1212 01:36:51.320511  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.320519  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:51.320526  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:51.320593  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1212 01:36:51.035818  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:53.036491  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:55.036601  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
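These interleaved warnings come from a second process (287206, the no-preload-361053 profile) hitting the same symptom against a different node IP. The request it keeps retrying can be issued by hand and, while the apiserver is down, fails with the same connection refused regardless of credentials (with the server up it would additionally need the client certificate from the profile's kubeconfig):

    curl -sk https://192.168.85.2:8443/api/v1/nodes/no-preload-361053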
	I1212 01:36:51.346751  291455 cri.go:89] found id: ""
	I1212 01:36:51.346773  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.346782  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:51.346788  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:51.346848  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:51.372774  291455 cri.go:89] found id: ""
	I1212 01:36:51.372797  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.372805  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:51.372820  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:51.372832  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:51.397287  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:51.397322  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:51.424395  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:51.424423  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:51.484364  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:51.484400  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:51.497751  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:51.497778  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:51.609432  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:51.593650    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.595213    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.596974    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.601995    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.602562    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:51.593650    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.595213    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.596974    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.601995    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.602562    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:54.111055  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:54.123333  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:54.123404  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:54.147152  291455 cri.go:89] found id: ""
	I1212 01:36:54.147218  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.147246  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:54.147268  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:54.147370  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:54.172120  291455 cri.go:89] found id: ""
	I1212 01:36:54.172186  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.172212  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:54.172233  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:54.172318  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:54.199177  291455 cri.go:89] found id: ""
	I1212 01:36:54.199242  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.199262  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:54.199269  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:54.199346  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:54.223691  291455 cri.go:89] found id: ""
	I1212 01:36:54.223716  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.223724  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:54.223731  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:54.223796  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:54.248969  291455 cri.go:89] found id: ""
	I1212 01:36:54.248991  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.249000  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:54.249007  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:54.249076  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:54.274124  291455 cri.go:89] found id: ""
	I1212 01:36:54.274149  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.274158  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:54.274165  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:54.274223  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:54.299049  291455 cri.go:89] found id: ""
	I1212 01:36:54.299071  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.299079  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:54.299085  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:54.299142  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:54.323692  291455 cri.go:89] found id: ""
	I1212 01:36:54.323727  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.323736  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:54.323745  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:54.323757  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:54.337075  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:54.337102  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:54.405905  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:54.396717    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.397409    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399032    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399536    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.401700    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:54.396717    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.397409    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399032    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399536    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.401700    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:54.405927  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:54.405938  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:54.432446  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:54.432489  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:54.461143  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:54.461170  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 01:36:57.536480  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:59.536672  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:57.017892  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:57.031680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:57.031754  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:57.058619  291455 cri.go:89] found id: ""
	I1212 01:36:57.058644  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.058661  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:57.058670  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:57.058744  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:57.082470  291455 cri.go:89] found id: ""
	I1212 01:36:57.082496  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.082505  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:57.082511  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:57.082569  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:57.107129  291455 cri.go:89] found id: ""
	I1212 01:36:57.107152  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.107161  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:57.107174  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:57.107235  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:57.131240  291455 cri.go:89] found id: ""
	I1212 01:36:57.131264  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.131272  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:57.131282  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:57.131339  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:57.161702  291455 cri.go:89] found id: ""
	I1212 01:36:57.161728  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.161737  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:57.161743  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:57.161800  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:57.186568  291455 cri.go:89] found id: ""
	I1212 01:36:57.186592  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.186601  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:57.186607  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:57.186724  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:57.211286  291455 cri.go:89] found id: ""
	I1212 01:36:57.211310  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.211319  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:57.211325  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:57.211382  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:57.236370  291455 cri.go:89] found id: ""
	I1212 01:36:57.236394  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.236403  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:57.236412  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:57.236423  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:57.292504  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:57.292539  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:57.306287  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:57.306314  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:57.369836  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:57.361540    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.362207    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.363914    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.364465    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.366079    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:57.361540    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.362207    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.363914    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.364465    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.366079    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:57.369856  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:57.369870  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:57.395588  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:57.395625  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
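
The block above is one pass of minikube's diagnostics loop: it probes for a running apiserver process, asks the CRI for each control-plane container by name, and, finding none, falls back to gathering kubelet/dmesg/containerd logs; "describe nodes" then fails because nothing is listening on localhost:8443. For reference, a minimal standalone sketch of the per-component container check, in the same spirit as the `crictl ps -a --quiet --name=...` calls logged here; this is illustrative, assumes sudo and crictl are available on the node, and is not minikube's actual cri.go code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // foundContainers mirrors the logged check: `crictl ps -a --quiet
    // --name=<component>` prints one container ID per line, and empty
    // output corresponds to the `No container was found matching ...`
    // warnings above.
    func foundContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
            ids, err := foundContainers(c)
            if err != nil {
                fmt.Printf("listing %q failed: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
        }
    }

An all-empty result across every component, as in this run, points at the control-plane pods never being created rather than at a single crashed container.
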
	I1212 01:36:59.923774  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:59.935843  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:59.935936  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:59.961362  291455 cri.go:89] found id: ""
	I1212 01:36:59.961383  291455 logs.go:282] 0 containers: []
	W1212 01:36:59.961392  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:59.961398  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:59.961453  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:59.987418  291455 cri.go:89] found id: ""
	I1212 01:36:59.987448  291455 logs.go:282] 0 containers: []
	W1212 01:36:59.987458  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:59.987463  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:59.987521  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:00.083321  291455 cri.go:89] found id: ""
	I1212 01:37:00.083352  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.083362  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:00.083369  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:00.083456  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:00.200170  291455 cri.go:89] found id: ""
	I1212 01:37:00.200535  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.200580  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:00.200686  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:00.201034  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:00.291145  291455 cri.go:89] found id: ""
	I1212 01:37:00.291235  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.291284  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:00.291318  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:00.291414  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:00.393558  291455 cri.go:89] found id: ""
	I1212 01:37:00.393606  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.393618  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:00.393626  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:00.393706  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:00.423985  291455 cri.go:89] found id: ""
	I1212 01:37:00.424023  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.424035  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:00.424041  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:00.424117  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:00.451670  291455 cri.go:89] found id: ""
	I1212 01:37:00.451695  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.451705  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:00.451715  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:00.451728  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:00.509577  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:00.509614  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:00.525099  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:00.525133  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:00.635419  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:00.627409    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.628095    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.629751    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.630057    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.631588    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:00.627409    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.628095    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.629751    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.630057    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.631588    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:00.635455  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:00.635468  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:00.663944  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:00.663984  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:37:02.037994  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:04.536623  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
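
The two warnings above come from a second, parallel test process (pid 287206) that polls the "no-preload-361053" node's Ready condition against 192.168.85.2:8443 and retries on connection refused; the same warnings recur throughout this section. A hedged client-go sketch of that polling pattern, using the kubeconfig path and node name from the log (the 2-second interval is an assumption, not the test's actual backoff):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            // A refused connection lands in err and is simply retried,
            // which is what produces the repeated node_ready warnings.
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-361053", metav1.GetOptions{})
            if err != nil {
                fmt.Printf("error getting node (will retry): %v\n", err)
                time.Sleep(2 * time.Second)
                continue
            }
            for _, cond := range node.Status.Conditions {
                if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                    fmt.Println("node is Ready")
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
    }
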
	I1212 01:37:03.194688  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:03.205352  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:03.205425  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:03.233099  291455 cri.go:89] found id: ""
	I1212 01:37:03.233131  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.233140  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:03.233146  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:03.233217  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:03.257676  291455 cri.go:89] found id: ""
	I1212 01:37:03.257700  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.257710  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:03.257716  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:03.257802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:03.282622  291455 cri.go:89] found id: ""
	I1212 01:37:03.282696  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.282719  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:03.282739  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:03.282834  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:03.309162  291455 cri.go:89] found id: ""
	I1212 01:37:03.309190  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.309199  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:03.309205  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:03.309265  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:03.334284  291455 cri.go:89] found id: ""
	I1212 01:37:03.334318  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.334327  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:03.334334  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:03.334401  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:03.361255  291455 cri.go:89] found id: ""
	I1212 01:37:03.361281  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.361290  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:03.361296  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:03.361376  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:03.386372  291455 cri.go:89] found id: ""
	I1212 01:37:03.386406  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.386415  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:03.386421  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:03.386490  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:03.412127  291455 cri.go:89] found id: ""
	I1212 01:37:03.412151  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.412160  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:03.412170  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:03.412181  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:03.467933  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:03.467980  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:03.481636  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:03.481663  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:03.565451  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:03.551611    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.552450    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.553999    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.554567    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.556109    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:03.551611    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.552450    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.553999    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.554567    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.556109    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:03.565476  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:03.565548  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:03.614744  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:03.614783  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
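
Each of these rounds opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`; pgrep exits non-zero when no process matches the full command line, which is what keeps the loop collecting diagnostics instead of proceeding. A rough standalone equivalent of that probe (illustrative only):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // With -f, -x matches the whole command line exactly; -n keeps
        // only the newest match. A non-zero exit status means no
        // kube-apiserver process is running.
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("no kube-apiserver process found:", err)
            return
        }
        fmt.Println("kube-apiserver PID:", strings.TrimSpace(string(out)))
    }
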
	I1212 01:37:06.159160  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:06.169841  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:06.169916  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:06.196496  291455 cri.go:89] found id: ""
	I1212 01:37:06.196521  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.196529  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:06.196536  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:06.196594  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:06.229404  291455 cri.go:89] found id: ""
	I1212 01:37:06.229429  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.229438  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:06.229444  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:06.229505  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:06.254056  291455 cri.go:89] found id: ""
	I1212 01:37:06.254081  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.254089  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:06.254095  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:06.254154  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:06.278424  291455 cri.go:89] found id: ""
	I1212 01:37:06.278453  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.278462  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:06.278469  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:06.278527  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:06.302517  291455 cri.go:89] found id: ""
	I1212 01:37:06.302545  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.302554  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:06.302560  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:06.302617  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:06.328634  291455 cri.go:89] found id: ""
	I1212 01:37:06.328657  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.328665  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:06.328671  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:06.328728  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:06.352026  291455 cri.go:89] found id: ""
	I1212 01:37:06.352099  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.352115  291455 logs.go:284] No container was found matching "kindnet"
	W1212 01:37:07.035836  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:09.035916  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:06.352125  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:06.352199  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:06.376075  291455 cri.go:89] found id: ""
	I1212 01:37:06.376101  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.376110  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:06.376119  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:06.376130  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:06.400451  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:06.400481  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:06.428356  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:06.428385  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:06.484230  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:06.484267  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:06.498047  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:06.498074  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:06.610705  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:06.593235    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.594305    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.599655    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.603092    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.603422    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:06.593235    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.594305    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.599655    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.603092    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.603422    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:09.111534  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:09.121786  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:09.121855  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:09.148241  291455 cri.go:89] found id: ""
	I1212 01:37:09.148267  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.148275  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:09.148282  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:09.148341  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:09.172742  291455 cri.go:89] found id: ""
	I1212 01:37:09.172764  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.172773  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:09.172779  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:09.172835  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:09.197560  291455 cri.go:89] found id: ""
	I1212 01:37:09.197586  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.197595  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:09.197601  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:09.197673  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:09.222352  291455 cri.go:89] found id: ""
	I1212 01:37:09.222377  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.222386  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:09.222392  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:09.222450  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:09.246770  291455 cri.go:89] found id: ""
	I1212 01:37:09.246794  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.246802  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:09.246809  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:09.246875  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:09.273237  291455 cri.go:89] found id: ""
	I1212 01:37:09.273260  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.273268  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:09.273275  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:09.273342  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:09.298382  291455 cri.go:89] found id: ""
	I1212 01:37:09.298405  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.298414  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:09.298421  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:09.298479  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:09.326366  291455 cri.go:89] found id: ""
	I1212 01:37:09.326388  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.326396  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:09.326405  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:09.326416  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:09.339892  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:09.339920  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:09.408533  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:09.399583    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.400465    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.402243    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.402860    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.404361    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:09.399583    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.400465    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.402243    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.402860    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.404361    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:09.408555  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:09.408568  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:09.434113  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:09.434149  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:09.469040  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:09.469065  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 01:37:11.036562  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:13.536873  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:12.025102  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:12.036649  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:12.036722  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:12.064882  291455 cri.go:89] found id: ""
	I1212 01:37:12.064905  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.064913  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:12.064919  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:12.064979  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:12.090328  291455 cri.go:89] found id: ""
	I1212 01:37:12.090354  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.090362  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:12.090369  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:12.090429  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:12.115640  291455 cri.go:89] found id: ""
	I1212 01:37:12.115665  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.115674  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:12.115680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:12.115741  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:12.140726  291455 cri.go:89] found id: ""
	I1212 01:37:12.140752  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.140773  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:12.140810  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:12.140900  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:12.165182  291455 cri.go:89] found id: ""
	I1212 01:37:12.165208  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.165216  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:12.165223  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:12.165282  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:12.189365  291455 cri.go:89] found id: ""
	I1212 01:37:12.189389  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.189398  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:12.189405  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:12.189463  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:12.214048  291455 cri.go:89] found id: ""
	I1212 01:37:12.214073  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.214082  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:12.214088  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:12.214148  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:12.240794  291455 cri.go:89] found id: ""
	I1212 01:37:12.240821  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.240830  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:12.240840  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:12.240851  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:12.300894  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:12.300936  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:12.314783  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:12.314817  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:12.382362  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:12.373621    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.374371    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.376069    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.376636    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.378249    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:12.373621    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.374371    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.376069    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.376636    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.378249    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:12.382385  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:12.382397  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:12.408884  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:12.408921  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:14.444251  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:37:14.509220  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:37:14.509386  291455 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
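
The dashboard apply fails before anything reaches the cluster: with validation enabled, kubectl first downloads the OpenAPI schema from the apiserver, and that request is what hits connection refused. The suggested `--validate=false` would only skip schema validation; the apply itself would still need a live apiserver, so minikube's "apply failed, will retry" path is the right response here. A sketch of that retry pattern, reusing the exact invocation from the log (the 5-attempt limit and linear backoff are assumptions, not minikube's actual policy):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // sudo accepts VAR=value assignments before the command, exactly
        // as the logged invocation does.
        args := []string{
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "apply", "--force", "-f", "/etc/kubernetes/addons/dashboard-ns.yaml",
        }
        for attempt := 1; attempt <= 5; attempt++ {
            out, err := exec.Command("sudo", args...).CombinedOutput()
            if err == nil {
                fmt.Printf("applied:\n%s", out)
                return
            }
            fmt.Printf("apply failed (attempt %d), will retry: %v\n%s", attempt, err, out)
            time.Sleep(time.Duration(attempt) * 2 * time.Second)
        }
    }
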
	I1212 01:37:14.942929  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:14.953301  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:14.953373  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:14.977865  291455 cri.go:89] found id: ""
	I1212 01:37:14.977933  291455 logs.go:282] 0 containers: []
	W1212 01:37:14.977947  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:14.977954  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:14.978019  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:15.012296  291455 cri.go:89] found id: ""
	I1212 01:37:15.012325  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.012335  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:15.012342  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:15.012414  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:15.044602  291455 cri.go:89] found id: ""
	I1212 01:37:15.044629  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.044638  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:15.044644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:15.044705  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:15.072008  291455 cri.go:89] found id: ""
	I1212 01:37:15.072035  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.072043  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:15.072049  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:15.072112  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:15.098264  291455 cri.go:89] found id: ""
	I1212 01:37:15.098293  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.098308  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:15.098316  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:15.098390  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:15.124176  291455 cri.go:89] found id: ""
	I1212 01:37:15.124203  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.124212  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:15.124218  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:15.124278  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:15.148763  291455 cri.go:89] found id: ""
	I1212 01:37:15.148788  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.148797  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:15.148803  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:15.148880  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:15.173843  291455 cri.go:89] found id: ""
	I1212 01:37:15.173870  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.173879  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:15.173889  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:15.173901  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:15.203728  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:15.203757  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:15.259019  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:15.259053  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:15.272480  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:15.272509  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:15.337558  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:15.329071    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.329763    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.331497    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.332089    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.333695    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:15.329071    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.329763    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.331497    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.332089    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.333695    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:15.337580  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:15.337592  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:17.027133  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:37:17.109229  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:37:17.109319  291455 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 01:37:17.112386  291455 out.go:179] * Enabled addons: 
	W1212 01:37:16.035841  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:18.035966  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:20.036082  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:17.115266  291455 addons.go:530] duration metric: took 1m58.649036473s for enable addons: enabled=[]
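The storage-provisioner apply fails for the same underlying reason: kubectl performs client-side validation by downloading the OpenAPI schema from the apiserver, and that download is refused. The error text itself suggests --validate=false as an escape hatch; a hedged retry using the exact paths from the log (it would still fail on this run, since the apiserver itself is down, not just validation):

	# --validate=false skips the OpenAPI schema download, but the apply
	# still needs a reachable apiserver, so it cannot rescue this run
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	  --validate=false -f /etc/kubernetes/addons/storage-provisioner.yaml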
	I1212 01:37:17.864277  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:17.875687  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:17.875762  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:17.900504  291455 cri.go:89] found id: ""
	I1212 01:37:17.900527  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.900536  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:17.900542  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:17.900626  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:17.925113  291455 cri.go:89] found id: ""
	I1212 01:37:17.925136  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.925145  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:17.925151  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:17.925238  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:17.950585  291455 cri.go:89] found id: ""
	I1212 01:37:17.950611  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.950620  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:17.950626  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:17.950687  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:17.977787  291455 cri.go:89] found id: ""
	I1212 01:37:17.977813  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.977822  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:17.977828  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:17.977888  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:18.006885  291455 cri.go:89] found id: ""
	I1212 01:37:18.006967  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.007019  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:18.007043  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:18.007118  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:18.033137  291455 cri.go:89] found id: ""
	I1212 01:37:18.033161  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.033170  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:18.033176  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:18.033238  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:18.058968  291455 cri.go:89] found id: ""
	I1212 01:37:18.059009  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.059019  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:18.059025  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:18.059087  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:18.084927  291455 cri.go:89] found id: ""
	I1212 01:37:18.084961  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.084971  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:18.084981  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:18.084994  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:18.153070  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:18.145061    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.145891    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.147207    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.147819    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.149000    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:18.145061    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.145891    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.147207    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.147819    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.149000    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:18.153101  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:18.153113  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:18.178193  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:18.178227  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:18.205844  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:18.205874  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:18.261619  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:18.261657  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:20.775910  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:20.797119  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:20.797192  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:20.870519  291455 cri.go:89] found id: ""
	I1212 01:37:20.870556  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.870566  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:20.870573  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:20.870642  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:20.895021  291455 cri.go:89] found id: ""
	I1212 01:37:20.895044  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.895053  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:20.895059  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:20.895119  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:20.918242  291455 cri.go:89] found id: ""
	I1212 01:37:20.918270  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.918279  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:20.918286  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:20.918340  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:20.942755  291455 cri.go:89] found id: ""
	I1212 01:37:20.942781  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.942790  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:20.942796  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:20.942855  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:20.966487  291455 cri.go:89] found id: ""
	I1212 01:37:20.966551  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.966574  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:20.966595  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:20.966680  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:20.992848  291455 cri.go:89] found id: ""
	I1212 01:37:20.992922  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.992945  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:20.992959  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:20.993035  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:21.025558  291455 cri.go:89] found id: ""
	I1212 01:37:21.025587  291455 logs.go:282] 0 containers: []
	W1212 01:37:21.025596  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:21.025602  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:21.025663  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:21.050967  291455 cri.go:89] found id: ""
	I1212 01:37:21.051023  291455 logs.go:282] 0 containers: []
	W1212 01:37:21.051032  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:21.051041  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:21.051057  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:21.077368  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:21.077396  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:21.133503  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:21.133538  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:21.147218  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:21.147245  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:21.209763  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:21.201479    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.202138    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.203803    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.204409    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.205960    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:21.201479    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.202138    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.203803    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.204409    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.205960    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:21.209786  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:21.209799  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1212 01:37:22.036593  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:24.536591  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:23.737746  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:23.747983  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:23.748051  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:23.772289  291455 cri.go:89] found id: ""
	I1212 01:37:23.772315  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.772333  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:23.772341  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:23.772420  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:23.848280  291455 cri.go:89] found id: ""
	I1212 01:37:23.848306  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.848315  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:23.848322  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:23.848386  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:23.884675  291455 cri.go:89] found id: ""
	I1212 01:37:23.884700  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.884709  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:23.884715  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:23.884777  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:23.914530  291455 cri.go:89] found id: ""
	I1212 01:37:23.914553  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.914561  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:23.914569  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:23.914626  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:23.940203  291455 cri.go:89] found id: ""
	I1212 01:37:23.940275  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.940292  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:23.940299  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:23.940364  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:23.968920  291455 cri.go:89] found id: ""
	I1212 01:37:23.968944  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.968952  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:23.968959  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:23.969016  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:23.993883  291455 cri.go:89] found id: ""
	I1212 01:37:23.993910  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.993919  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:23.993925  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:23.993985  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:24.019876  291455 cri.go:89] found id: ""
	I1212 01:37:24.019901  291455 logs.go:282] 0 containers: []
	W1212 01:37:24.019909  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:24.019922  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:24.019935  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:24.052560  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:24.052586  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:24.107812  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:24.107847  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:24.121870  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:24.121902  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:24.193432  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:24.184434    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.184974    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.185943    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.187426    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.187845    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:24.184434    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.184974    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.185943    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.187426    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.187845    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:24.193458  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:24.193471  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1212 01:37:26.536664  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:29.036444  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:26.720901  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:26.732114  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:26.732194  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:26.759421  291455 cri.go:89] found id: ""
	I1212 01:37:26.759443  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.759451  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:26.759458  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:26.759523  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:26.801227  291455 cri.go:89] found id: ""
	I1212 01:37:26.801252  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.801261  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:26.801290  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:26.801371  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:26.836143  291455 cri.go:89] found id: ""
	I1212 01:37:26.836168  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.836178  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:26.836184  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:26.836276  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:26.880334  291455 cri.go:89] found id: ""
	I1212 01:37:26.880373  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.880382  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:26.880388  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:26.880477  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:26.915704  291455 cri.go:89] found id: ""
	I1212 01:37:26.915769  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.915786  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:26.915793  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:26.915864  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:26.943219  291455 cri.go:89] found id: ""
	I1212 01:37:26.943252  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.943262  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:26.943269  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:26.943350  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:26.968790  291455 cri.go:89] found id: ""
	I1212 01:37:26.968867  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.968882  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:26.968889  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:26.968946  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:26.993867  291455 cri.go:89] found id: ""
	I1212 01:37:26.993892  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.993908  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:26.993918  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:26.993929  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:27.025483  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:27.025547  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:27.081672  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:27.081704  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:27.095698  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:27.095724  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:27.161161  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:27.151369    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.152034    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.153696    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.156078    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.157312    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:27.151369    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.152034    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.153696    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.156078    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.157312    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:27.161189  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:27.161202  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:29.686768  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:29.699055  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:29.699131  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:29.725025  291455 cri.go:89] found id: ""
	I1212 01:37:29.725050  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.725059  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:29.725065  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:29.725140  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:29.749378  291455 cri.go:89] found id: ""
	I1212 01:37:29.749401  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.749410  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:29.749416  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:29.749481  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:29.773953  291455 cri.go:89] found id: ""
	I1212 01:37:29.773978  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.773987  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:29.773993  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:29.774052  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:29.831695  291455 cri.go:89] found id: ""
	I1212 01:37:29.831723  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.831732  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:29.831738  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:29.831794  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:29.881376  291455 cri.go:89] found id: ""
	I1212 01:37:29.881401  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.881412  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:29.881418  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:29.881477  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:29.905463  291455 cri.go:89] found id: ""
	I1212 01:37:29.905497  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.905506  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:29.905530  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:29.905618  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:29.929393  291455 cri.go:89] found id: ""
	I1212 01:37:29.929427  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.929436  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:29.929442  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:29.929507  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:29.956794  291455 cri.go:89] found id: ""
	I1212 01:37:29.956820  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.956829  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:29.956839  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:29.956850  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:29.981845  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:29.981878  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:30.037712  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:30.037751  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:30.096286  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:30.096320  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:30.111120  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:30.111160  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:30.180653  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:30.171653    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.172384    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.174167    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.174765    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.176527    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:30.171653    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.172384    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.174167    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.174765    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.176527    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 01:37:31.535946  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:33.536464  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:32.681768  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:32.693283  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:32.693354  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:32.720606  291455 cri.go:89] found id: ""
	I1212 01:37:32.720629  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.720638  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:32.720644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:32.720703  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:32.747145  291455 cri.go:89] found id: ""
	I1212 01:37:32.747167  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.747177  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:32.747185  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:32.747243  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:32.772037  291455 cri.go:89] found id: ""
	I1212 01:37:32.772061  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.772070  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:32.772076  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:32.772134  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:32.862885  291455 cri.go:89] found id: ""
	I1212 01:37:32.862910  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.862919  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:32.862925  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:32.862983  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:32.888016  291455 cri.go:89] found id: ""
	I1212 01:37:32.888038  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.888049  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:32.888055  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:32.888115  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:32.912450  291455 cri.go:89] found id: ""
	I1212 01:37:32.912472  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.912481  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:32.912487  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:32.912544  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:32.935759  291455 cri.go:89] found id: ""
	I1212 01:37:32.935781  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.935790  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:32.935797  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:32.935855  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:32.963827  291455 cri.go:89] found id: ""
	I1212 01:37:32.963850  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.963858  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:32.963869  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:32.963880  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:32.988758  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:32.988788  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:33.021942  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:33.021973  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:33.078907  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:33.078940  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:33.094242  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:33.094270  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:33.157981  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:33.149433    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.150328    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.151907    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.152360    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.153844    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:33.149433    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.150328    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.151907    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.152360    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.153844    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:35.659737  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:35.672022  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:35.672098  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:35.701308  291455 cri.go:89] found id: ""
	I1212 01:37:35.701334  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.701343  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:35.701349  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:35.701408  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:35.726385  291455 cri.go:89] found id: ""
	I1212 01:37:35.726409  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.726418  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:35.726424  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:35.726482  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:35.751557  291455 cri.go:89] found id: ""
	I1212 01:37:35.751593  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.751604  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:35.751610  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:35.751679  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:35.776892  291455 cri.go:89] found id: ""
	I1212 01:37:35.776956  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.776971  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:35.776982  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:35.777044  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:35.824076  291455 cri.go:89] found id: ""
	I1212 01:37:35.824107  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.824116  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:35.824122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:35.824179  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:35.880084  291455 cri.go:89] found id: ""
	I1212 01:37:35.880107  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.880115  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:35.880122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:35.880192  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:35.907066  291455 cri.go:89] found id: ""
	I1212 01:37:35.907091  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.907099  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:35.907105  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:35.907166  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:35.936636  291455 cri.go:89] found id: ""
	I1212 01:37:35.936713  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.936729  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:35.936739  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:35.936750  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:35.993085  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:35.993119  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:36.007767  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:36.007856  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:36.076959  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:36.068314    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.068888    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.070632    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.071390    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.072929    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:36.068314    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.068888    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.070632    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.071390    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.072929    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:36.076984  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:36.076997  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:36.103429  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:36.103463  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:37:36.036277  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:38.536154  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
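The interleaved no-preload-361053 profile (process 287206) is stuck the same way: its readiness poll cannot even open a TCP connection to 192.168.85.2:8443. A quick manual probe of that endpoint, assuming shell access to a host that can reach the node's bridge IP (a diagnostic sketch, not part of the test harness):

	# -k skips certificate verification; "connection refused" here confirms
	# the apiserver socket is not listening, matching the warnings above
	curl -k --max-time 5 https://192.168.85.2:8443/api/v1/nodes/no-preload-361053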
	I1212 01:37:38.632890  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:38.643831  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:38.643909  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:38.671085  291455 cri.go:89] found id: ""
	I1212 01:37:38.671108  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.671116  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:38.671122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:38.671182  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:38.694933  291455 cri.go:89] found id: ""
	I1212 01:37:38.694958  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.694966  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:38.694972  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:38.695070  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:38.723033  291455 cri.go:89] found id: ""
	I1212 01:37:38.723060  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.723069  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:38.723075  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:38.723135  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:38.748068  291455 cri.go:89] found id: ""
	I1212 01:37:38.748093  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.748102  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:38.748109  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:38.748169  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:38.778336  291455 cri.go:89] found id: ""
	I1212 01:37:38.778362  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.778371  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:38.778377  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:38.778438  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:38.824425  291455 cri.go:89] found id: ""
	I1212 01:37:38.824452  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.824461  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:38.824468  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:38.824526  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:38.869581  291455 cri.go:89] found id: ""
	I1212 01:37:38.869607  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.869616  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:38.869623  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:38.869684  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:38.898375  291455 cri.go:89] found id: ""
	I1212 01:37:38.898401  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.898411  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:38.898420  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:38.898431  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:38.924559  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:38.924594  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:38.954848  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:38.954884  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:39.010528  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:39.010564  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:39.024383  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:39.024412  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:39.090716  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:39.082311    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.082890    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.084642    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.085084    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.086585    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:39.082311    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.082890    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.084642    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.085084    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.086585    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 01:37:40.536718  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:43.036535  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:45.036776  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:41.591539  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:41.602064  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:41.602135  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:41.626512  291455 cri.go:89] found id: ""
	I1212 01:37:41.626584  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.626609  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:41.626629  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:41.626713  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:41.651218  291455 cri.go:89] found id: ""
	I1212 01:37:41.651294  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.651317  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:41.651339  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:41.651429  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:41.676032  291455 cri.go:89] found id: ""
	I1212 01:37:41.676055  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.676064  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:41.676070  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:41.676144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:41.700472  291455 cri.go:89] found id: ""
	I1212 01:37:41.700495  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.700509  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:41.700516  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:41.700573  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:41.728292  291455 cri.go:89] found id: ""
	I1212 01:37:41.728317  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.728326  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:41.728332  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:41.728413  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:41.752458  291455 cri.go:89] found id: ""
	I1212 01:37:41.752496  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.752508  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:41.752515  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:41.752687  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:41.778677  291455 cri.go:89] found id: ""
	I1212 01:37:41.778703  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.778711  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:41.778717  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:41.778802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:41.831103  291455 cri.go:89] found id: ""
	I1212 01:37:41.831129  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.831138  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:41.831147  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:41.831158  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:41.922931  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:41.914201    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.914946    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.916560    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.917145    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.918787    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:41.914201    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.914946    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.916560    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.917145    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.918787    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:41.922954  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:41.922966  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:41.948574  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:41.948606  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:41.976883  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:41.976910  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:42.031740  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:42.031774  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:44.547156  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:44.557779  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:44.557852  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:44.585516  291455 cri.go:89] found id: ""
	I1212 01:37:44.585539  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.585547  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:44.585554  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:44.585614  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:44.610080  291455 cri.go:89] found id: ""
	I1212 01:37:44.610146  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.610170  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:44.610188  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:44.610282  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:44.634333  291455 cri.go:89] found id: ""
	I1212 01:37:44.634403  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.634428  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:44.634449  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:44.634538  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:44.659415  291455 cri.go:89] found id: ""
	I1212 01:37:44.659441  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.659450  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:44.659457  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:44.659518  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:44.688713  291455 cri.go:89] found id: ""
	I1212 01:37:44.688738  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.688747  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:44.688753  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:44.688813  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:44.713219  291455 cri.go:89] found id: ""
	I1212 01:37:44.713245  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.713262  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:44.713270  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:44.713334  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:44.736447  291455 cri.go:89] found id: ""
	I1212 01:37:44.736472  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.736480  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:44.736486  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:44.736562  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:44.762258  291455 cri.go:89] found id: ""
	I1212 01:37:44.762283  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.762292  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:44.762324  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:44.762341  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:44.839027  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:44.839065  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:44.856616  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:44.856643  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:44.936247  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:44.928242    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.928784    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.930267    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.930803    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.932347    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:44.928242    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.928784    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.930267    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.930803    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.932347    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:44.936278  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:44.936291  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:44.961626  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:44.961659  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:37:47.536481  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:49.536708  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:47.490976  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:47.501776  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:47.501852  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:47.532240  291455 cri.go:89] found id: ""
	I1212 01:37:47.532263  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.532271  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:47.532276  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:47.532336  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:47.556453  291455 cri.go:89] found id: ""
	I1212 01:37:47.556475  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.556484  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:47.556490  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:47.556551  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:47.580605  291455 cri.go:89] found id: ""
	I1212 01:37:47.580628  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.580637  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:47.580643  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:47.580709  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:47.605106  291455 cri.go:89] found id: ""
	I1212 01:37:47.605130  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.605139  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:47.605145  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:47.605224  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:47.630587  291455 cri.go:89] found id: ""
	I1212 01:37:47.630613  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.630622  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:47.630629  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:47.630733  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:47.656391  291455 cri.go:89] found id: ""
	I1212 01:37:47.656416  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.656424  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:47.656431  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:47.656489  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:47.680787  291455 cri.go:89] found id: ""
	I1212 01:37:47.680817  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.680826  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:47.680832  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:47.680913  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:47.706371  291455 cri.go:89] found id: ""
	I1212 01:37:47.706396  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.706405  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:47.706414  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:47.706458  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:47.763648  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:47.763687  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:47.777355  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:47.777383  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:47.899204  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:47.891161    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.891855    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.893228    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.893728    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.895403    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:47.891161    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.891855    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.893228    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.893728    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.895403    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:47.899226  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:47.899238  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:47.924220  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:47.924256  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:50.458301  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:50.468856  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:50.468926  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:50.493349  291455 cri.go:89] found id: ""
	I1212 01:37:50.493374  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.493382  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:50.493388  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:50.493445  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:50.517926  291455 cri.go:89] found id: ""
	I1212 01:37:50.517951  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.517960  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:50.517966  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:50.518026  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:50.546779  291455 cri.go:89] found id: ""
	I1212 01:37:50.546805  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.546814  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:50.546819  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:50.546877  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:50.572059  291455 cri.go:89] found id: ""
	I1212 01:37:50.572086  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.572102  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:50.572110  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:50.572173  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:50.596562  291455 cri.go:89] found id: ""
	I1212 01:37:50.596585  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.596594  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:50.596601  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:50.596669  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:50.621102  291455 cri.go:89] found id: ""
	I1212 01:37:50.621124  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.621132  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:50.621138  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:50.621196  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:50.645424  291455 cri.go:89] found id: ""
	I1212 01:37:50.645445  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.645454  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:50.645461  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:50.645521  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:50.670456  291455 cri.go:89] found id: ""
	I1212 01:37:50.670479  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.670487  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:50.670497  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:50.670508  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:50.726487  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:50.726519  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:50.740149  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:50.740178  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:50.846147  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:50.836239    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.837070    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.839024    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.839387    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.840598    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:50.836239    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.837070    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.839024    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.839387    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.840598    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:50.846174  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:50.846188  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:50.882509  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:50.882583  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:37:52.036566  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:54.036621  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:53.411213  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:53.421355  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:53.421422  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:53.444104  291455 cri.go:89] found id: ""
	I1212 01:37:53.444130  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.444139  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:53.444146  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:53.444205  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:53.467938  291455 cri.go:89] found id: ""
	I1212 01:37:53.467963  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.467972  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:53.467979  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:53.468038  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:53.492082  291455 cri.go:89] found id: ""
	I1212 01:37:53.492106  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.492115  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:53.492122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:53.492180  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:53.516011  291455 cri.go:89] found id: ""
	I1212 01:37:53.516040  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.516049  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:53.516056  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:53.516115  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:53.543513  291455 cri.go:89] found id: ""
	I1212 01:37:53.543550  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.543559  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:53.543565  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:53.543707  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:53.568681  291455 cri.go:89] found id: ""
	I1212 01:37:53.568705  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.568713  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:53.568720  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:53.568797  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:53.593562  291455 cri.go:89] found id: ""
	I1212 01:37:53.593587  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.593596  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:53.593602  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:53.593676  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:53.617634  291455 cri.go:89] found id: ""
	I1212 01:37:53.617658  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.617667  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:53.617677  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:53.617691  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:53.672956  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:53.672991  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:53.686739  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:53.686767  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:53.753435  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:53.745274    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.746109    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.747777    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.748302    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.749767    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:53.745274    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.746109    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.747777    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.748302    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.749767    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:53.753456  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:53.753470  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:53.785303  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:53.785347  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:37:56.536427  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:59.036479  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:56.343327  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:56.353619  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:56.353686  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:56.377008  291455 cri.go:89] found id: ""
	I1212 01:37:56.377032  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.377040  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:56.377047  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:56.377103  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:56.403572  291455 cri.go:89] found id: ""
	I1212 01:37:56.403599  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.403607  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:56.403614  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:56.403677  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:56.427234  291455 cri.go:89] found id: ""
	I1212 01:37:56.427256  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.427266  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:56.427272  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:56.427329  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:56.450300  291455 cri.go:89] found id: ""
	I1212 01:37:56.450325  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.450334  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:56.450340  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:56.450399  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:56.478269  291455 cri.go:89] found id: ""
	I1212 01:37:56.478293  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.478302  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:56.478308  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:56.478402  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:56.502839  291455 cri.go:89] found id: ""
	I1212 01:37:56.502863  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.502872  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:56.502879  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:56.502939  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:56.528770  291455 cri.go:89] found id: ""
	I1212 01:37:56.528796  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.528804  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:56.528810  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:56.528886  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:56.552625  291455 cri.go:89] found id: ""
	I1212 01:37:56.552687  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.552701  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:56.552710  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:56.552722  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:56.582901  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:56.582929  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:56.638758  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:56.638790  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:56.652337  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:56.652364  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:56.718815  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:56.710468    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.711245    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.712862    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.713372    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.714933    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:56.710468    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.711245    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.712862    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.713372    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.714933    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:56.718853  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:56.718866  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:59.245105  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:59.255232  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:59.255300  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:59.280996  291455 cri.go:89] found id: ""
	I1212 01:37:59.281018  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.281027  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:59.281033  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:59.281089  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:59.306870  291455 cri.go:89] found id: ""
	I1212 01:37:59.306893  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.306901  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:59.306908  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:59.306967  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:59.332982  291455 cri.go:89] found id: ""
	I1212 01:37:59.333008  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.333017  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:59.333022  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:59.333128  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:59.360799  291455 cri.go:89] found id: ""
	I1212 01:37:59.360824  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.360833  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:59.360839  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:59.360897  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:59.383773  291455 cri.go:89] found id: ""
	I1212 01:37:59.383836  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.383851  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:59.383858  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:59.383916  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:59.411933  291455 cri.go:89] found id: ""
	I1212 01:37:59.411958  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.411966  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:59.411973  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:59.412073  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:59.437061  291455 cri.go:89] found id: ""
	I1212 01:37:59.437087  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.437095  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:59.437102  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:59.437182  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:59.461853  291455 cri.go:89] found id: ""
	I1212 01:37:59.461877  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.461886  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:59.461895  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:59.461907  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:59.493084  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:59.493111  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:59.549198  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:59.549229  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:59.562644  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:59.562674  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:59.627349  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:59.619195    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.619835    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.621508    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.622053    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.623671    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:59.627373  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:59.627388  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1212 01:38:01.535866  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:03.536428  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
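	The interleaved node_ready.go warnings above come from a second test process (PID 287206) polling the no-preload node's "Ready" condition and retrying on connection refused. A rough sketch of that poll loop, using a plain HTTP GET against the node object and hypothetical helper names (minikube's real code goes through client-go, not raw HTTP):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitNodeReady is a hypothetical sketch of the retry loop behind the
// node_ready.go warnings: keep requesting the node object until the
// apiserver answers, logging and retrying on errors such as
// "connect: connection refused".
func waitNodeReady(apiServer, node string, timeout time.Duration) error {
	// The test cluster uses self-signed certs; skipping verification
	// here is purely for the sketch.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(apiServer + "/api/v1/nodes/" + node)
		if err != nil {
			fmt.Printf("error getting node %q (will retry): %v\n", node, err)
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil // the caller would then inspect the Ready condition
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not reachable within %s", node, timeout)
}

func main() {
	if err := waitNodeReady("https://192.168.85.2:8443", "no-preload-361053", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```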
	I1212 01:38:02.153040  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:02.163386  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:02.163465  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:02.188022  291455 cri.go:89] found id: ""
	I1212 01:38:02.188050  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.188058  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:02.188064  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:02.188126  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:02.212051  291455 cri.go:89] found id: ""
	I1212 01:38:02.212088  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.212097  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:02.212104  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:02.212163  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:02.236784  291455 cri.go:89] found id: ""
	I1212 01:38:02.236815  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.236824  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:02.236831  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:02.236895  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:02.262277  291455 cri.go:89] found id: ""
	I1212 01:38:02.262301  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.262310  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:02.262316  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:02.262375  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:02.286641  291455 cri.go:89] found id: ""
	I1212 01:38:02.286665  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.286674  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:02.286680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:02.286739  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:02.315696  291455 cri.go:89] found id: ""
	I1212 01:38:02.315721  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.315729  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:02.315736  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:02.315796  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:02.341469  291455 cri.go:89] found id: ""
	I1212 01:38:02.341495  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.341504  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:02.341511  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:02.341578  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:02.375601  291455 cri.go:89] found id: ""
	I1212 01:38:02.375626  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.375634  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:02.375644  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:02.375656  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:02.388949  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:02.388978  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:02.458902  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:02.448758    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.449311    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.452630    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.453261    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.454829    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:02.458924  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:02.458936  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:02.485359  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:02.485393  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:02.512676  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:02.512746  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
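	Each "Gathering logs for ..." pair in these cycles is logs.go running a fixed shell command (journalctl for kubelet and containerd, dmesg for the kernel ring buffer) and keeping only the tail. A condensed sketch of that step, run locally instead of over minikube's ssh_runner; the commands themselves are copied verbatim from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Each source is capped (last 400 lines) so failure reports stay bounded.
	sources := map[string]string{
		"kubelet":    `sudo journalctl -u kubelet -n 400`,
		"containerd": `sudo journalctl -u containerd -n 400`,
		"dmesg":      `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
	}
	for name, cmd := range sources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d bytes of logs\n", name, len(out))
	}
}
```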
	I1212 01:38:05.069728  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:05.084872  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:05.084975  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:05.130414  291455 cri.go:89] found id: ""
	I1212 01:38:05.130441  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.130450  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:05.130457  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:05.130524  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:05.156129  291455 cri.go:89] found id: ""
	I1212 01:38:05.156154  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.156163  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:05.156169  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:05.156230  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:05.182033  291455 cri.go:89] found id: ""
	I1212 01:38:05.182056  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.182065  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:05.182071  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:05.182131  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:05.206795  291455 cri.go:89] found id: ""
	I1212 01:38:05.206821  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.206830  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:05.206842  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:05.206903  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:05.231972  291455 cri.go:89] found id: ""
	I1212 01:38:05.231998  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.232008  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:05.232014  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:05.232075  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:05.257476  291455 cri.go:89] found id: ""
	I1212 01:38:05.257501  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.257509  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:05.257515  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:05.257576  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:05.282557  291455 cri.go:89] found id: ""
	I1212 01:38:05.282581  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.282590  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:05.282595  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:05.282655  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:05.306866  291455 cri.go:89] found id: ""
	I1212 01:38:05.306891  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.306899  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:05.306908  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:05.306919  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:05.363028  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:05.363073  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:05.376693  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:05.376722  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:05.445040  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:05.435873    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.436618    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.438470    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.439137    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.440737    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:05.445059  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:05.445071  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:05.470893  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:05.470933  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:05.536804  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:08.035822  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:10.036632  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:08.000563  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:08.015628  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:08.015701  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:08.081620  291455 cri.go:89] found id: ""
	I1212 01:38:08.081643  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.081652  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:08.081661  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:08.081736  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:08.129116  291455 cri.go:89] found id: ""
	I1212 01:38:08.129137  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.129146  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:08.129152  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:08.129208  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:08.154760  291455 cri.go:89] found id: ""
	I1212 01:38:08.154781  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.154790  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:08.154797  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:08.154853  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:08.181948  291455 cri.go:89] found id: ""
	I1212 01:38:08.181971  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.181981  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:08.181988  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:08.182052  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:08.206310  291455 cri.go:89] found id: ""
	I1212 01:38:08.206335  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.206345  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:08.206351  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:08.206413  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:08.230579  291455 cri.go:89] found id: ""
	I1212 01:38:08.230606  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.230615  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:08.230624  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:08.230690  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:08.259888  291455 cri.go:89] found id: ""
	I1212 01:38:08.259913  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.259922  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:08.259928  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:08.260006  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:08.284903  291455 cri.go:89] found id: ""
	I1212 01:38:08.284927  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.284936  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:08.284945  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:08.284957  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:08.341529  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:08.341565  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:08.355353  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:08.355394  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:08.418766  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:08.409488    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.410375    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.412414    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.413281    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.414948    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:08.418789  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:08.418801  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:08.444616  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:08.444654  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
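	The repeated "failed describe nodes" records show the same failure mode each time: kubectl exits with status 1 because nothing is listening on localhost:8443, so stdout is empty and stderr carries the memcache.go connection-refused errors. A small sketch of running that command and separating the two streams; the binary and kubeconfig paths are taken verbatim from the log and assume a minikube guest filesystem:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the log's "describe nodes" step.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		// With the apiserver down this reproduces the records above:
		// empty stdout, connection-refused lines on stderr, exit status 1.
		fmt.Printf("failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s\n",
			err, stdout.String(), stderr.String())
	}
}
```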
	I1212 01:38:10.972656  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:10.983126  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:10.983206  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:11.011272  291455 cri.go:89] found id: ""
	I1212 01:38:11.011296  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.011305  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:11.011311  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:11.011372  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:11.061173  291455 cri.go:89] found id: ""
	I1212 01:38:11.061199  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.061208  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:11.061214  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:11.061273  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:11.124035  291455 cri.go:89] found id: ""
	I1212 01:38:11.124061  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.124070  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:11.124077  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:11.124144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:11.152861  291455 cri.go:89] found id: ""
	I1212 01:38:11.152900  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.152910  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:11.152932  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:11.153005  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:11.178248  291455 cri.go:89] found id: ""
	I1212 01:38:11.178270  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.178279  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:11.178285  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:11.178355  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:11.213235  291455 cri.go:89] found id: ""
	I1212 01:38:11.213260  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.213269  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:11.213275  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:11.213337  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:11.238933  291455 cri.go:89] found id: ""
	I1212 01:38:11.238960  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.238969  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:11.238975  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:11.239060  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:11.264115  291455 cri.go:89] found id: ""
	I1212 01:38:11.264137  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.264146  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:11.264155  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:11.264167  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:11.320523  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:11.320561  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:11.334027  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:11.334059  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:12.036672  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:14.536663  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:11.411780  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:11.403056    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.403575    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.405319    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.405839    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.407505    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:11.411803  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:11.411815  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:11.437459  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:11.437498  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:13.966371  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:13.976737  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:13.976807  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:14.002889  291455 cri.go:89] found id: ""
	I1212 01:38:14.002926  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.002936  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:14.002943  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:14.003051  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:14.028607  291455 cri.go:89] found id: ""
	I1212 01:38:14.028632  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.028640  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:14.028647  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:14.028707  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:14.068137  291455 cri.go:89] found id: ""
	I1212 01:38:14.068159  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.068168  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:14.068174  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:14.068236  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:14.114047  291455 cri.go:89] found id: ""
	I1212 01:38:14.114068  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.114077  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:14.114083  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:14.114142  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:14.143724  291455 cri.go:89] found id: ""
	I1212 01:38:14.143751  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.143760  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:14.143766  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:14.143837  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:14.172821  291455 cri.go:89] found id: ""
	I1212 01:38:14.172844  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.172853  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:14.172860  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:14.172922  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:14.201404  291455 cri.go:89] found id: ""
	I1212 01:38:14.201428  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.201437  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:14.201443  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:14.201502  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:14.225421  291455 cri.go:89] found id: ""
	I1212 01:38:14.225445  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.225454  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:14.225464  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:14.225475  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:14.281620  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:14.281655  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:14.295270  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:14.295297  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:14.361558  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:14.353174    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.353959    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.355541    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.356054    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.357617    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:14.361580  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:14.361594  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:14.387622  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:14.387657  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:17.036493  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:19.535924  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:16.917930  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:16.928677  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:16.928747  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:16.956782  291455 cri.go:89] found id: ""
	I1212 01:38:16.956805  291455 logs.go:282] 0 containers: []
	W1212 01:38:16.956815  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:16.956821  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:16.956882  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:16.982223  291455 cri.go:89] found id: ""
	I1212 01:38:16.982255  291455 logs.go:282] 0 containers: []
	W1212 01:38:16.982264  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:16.982270  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:16.982337  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:17.011072  291455 cri.go:89] found id: ""
	I1212 01:38:17.011097  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.011107  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:17.011114  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:17.011191  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:17.052070  291455 cri.go:89] found id: ""
	I1212 01:38:17.052096  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.052104  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:17.052110  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:17.052177  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:17.084107  291455 cri.go:89] found id: ""
	I1212 01:38:17.084141  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.084151  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:17.084157  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:17.084224  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:17.122692  291455 cri.go:89] found id: ""
	I1212 01:38:17.122766  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.122797  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:17.122817  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:17.122923  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:17.156006  291455 cri.go:89] found id: ""
	I1212 01:38:17.156081  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.156109  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:17.156129  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:17.156241  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:17.182169  291455 cri.go:89] found id: ""
	I1212 01:38:17.182240  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.182264  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:17.182285  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:17.182335  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:17.237895  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:17.237933  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:17.252584  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:17.252654  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:17.321480  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:17.312815    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.313531    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.315204    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.315765    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.317270    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:17.321502  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:17.321515  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:17.347596  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:17.347629  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:19.879967  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:19.890396  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:19.890464  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:19.918925  291455 cri.go:89] found id: ""
	I1212 01:38:19.918949  291455 logs.go:282] 0 containers: []
	W1212 01:38:19.918958  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:19.918964  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:19.919053  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:19.943584  291455 cri.go:89] found id: ""
	I1212 01:38:19.943610  291455 logs.go:282] 0 containers: []
	W1212 01:38:19.943619  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:19.943626  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:19.943681  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:19.969048  291455 cri.go:89] found id: ""
	I1212 01:38:19.969068  291455 logs.go:282] 0 containers: []
	W1212 01:38:19.969077  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:19.969083  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:19.969144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:20.003773  291455 cri.go:89] found id: ""
	I1212 01:38:20.003795  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.003804  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:20.003821  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:20.003894  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:20.066569  291455 cri.go:89] found id: ""
	I1212 01:38:20.066593  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.066602  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:20.066608  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:20.066672  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:20.123787  291455 cri.go:89] found id: ""
	I1212 01:38:20.123818  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.123828  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:20.123835  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:20.123902  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:20.148942  291455 cri.go:89] found id: ""
	I1212 01:38:20.148967  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.148976  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:20.148982  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:20.149040  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:20.174974  291455 cri.go:89] found id: ""
	I1212 01:38:20.175019  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.175028  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:20.175037  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:20.175049  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:20.188705  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:20.188734  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:20.257975  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:20.247998    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.248900    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.250615    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.251381    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.253188    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:20.247998    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.248900    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.250615    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.251381    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.253188    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:20.258004  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:20.258018  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:20.283558  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:20.283589  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:20.313552  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:20.313580  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 01:38:21.535995  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:23.536531  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:22.869782  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:22.880016  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:22.880091  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:22.903866  291455 cri.go:89] found id: ""
	I1212 01:38:22.903891  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.903901  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:22.903908  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:22.903971  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:22.927721  291455 cri.go:89] found id: ""
	I1212 01:38:22.927744  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.927752  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:22.927759  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:22.927816  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:22.952423  291455 cri.go:89] found id: ""
	I1212 01:38:22.952447  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.952455  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:22.952461  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:22.952517  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:22.976598  291455 cri.go:89] found id: ""
	I1212 01:38:22.976620  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.976628  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:22.976634  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:22.976691  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:23.003885  291455 cri.go:89] found id: ""
	I1212 01:38:23.003919  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.003939  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:23.003947  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:23.004046  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:23.033013  291455 cri.go:89] found id: ""
	I1212 01:38:23.033036  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.033045  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:23.033052  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:23.033112  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:23.092706  291455 cri.go:89] found id: ""
	I1212 01:38:23.092730  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.092739  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:23.092745  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:23.092802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:23.133640  291455 cri.go:89] found id: ""
	I1212 01:38:23.133668  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.133676  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:23.133686  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:23.133697  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:23.196413  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:23.196452  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:23.209608  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:23.209634  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:23.275524  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:23.267738    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.268351    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.269907    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.270261    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.271739    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:23.267738    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.268351    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.269907    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.270261    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.271739    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:23.275547  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:23.275559  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:23.300618  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:23.300651  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:25.829093  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:25.839308  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:25.839392  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:25.862901  291455 cri.go:89] found id: ""
	I1212 01:38:25.862927  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.862936  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:25.862942  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:25.863050  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:25.886878  291455 cri.go:89] found id: ""
	I1212 01:38:25.886912  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.886921  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:25.886927  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:25.887012  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:25.912760  291455 cri.go:89] found id: ""
	I1212 01:38:25.912782  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.912791  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:25.912799  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:25.912867  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:25.937385  291455 cri.go:89] found id: ""
	I1212 01:38:25.937409  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.937418  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:25.937424  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:25.937482  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:25.961635  291455 cri.go:89] found id: ""
	I1212 01:38:25.961659  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.961668  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:25.961674  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:25.961736  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:25.984780  291455 cri.go:89] found id: ""
	I1212 01:38:25.984804  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.984814  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:25.984821  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:25.984886  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:26.013891  291455 cri.go:89] found id: ""
	I1212 01:38:26.013918  291455 logs.go:282] 0 containers: []
	W1212 01:38:26.013927  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:26.013933  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:26.013995  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:26.058178  291455 cri.go:89] found id: ""
	I1212 01:38:26.058203  291455 logs.go:282] 0 containers: []
	W1212 01:38:26.058212  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:26.058222  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:26.058233  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:26.145226  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:26.145265  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:26.159401  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:26.159430  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:26.224696  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:26.216061    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.217085    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.217937    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.219401    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.219913    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:26.216061    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.217085    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.217937    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.219401    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.219913    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:26.224716  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:26.224727  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:26.249818  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:26.249853  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:25.536763  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:28.036701  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:30.036797  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:28.780686  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:28.791844  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:28.791927  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:28.820089  291455 cri.go:89] found id: ""
	I1212 01:38:28.820114  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.820123  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:28.820129  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:28.820187  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:28.844073  291455 cri.go:89] found id: ""
	I1212 01:38:28.844097  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.844106  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:28.844115  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:28.844173  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:28.874510  291455 cri.go:89] found id: ""
	I1212 01:38:28.874535  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.874544  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:28.874550  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:28.874609  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:28.899593  291455 cri.go:89] found id: ""
	I1212 01:38:28.899667  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.899683  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:28.899691  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:28.899749  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:28.923958  291455 cri.go:89] found id: ""
	I1212 01:38:28.923981  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.923990  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:28.923996  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:28.924058  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:28.949188  291455 cri.go:89] found id: ""
	I1212 01:38:28.949217  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.949225  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:28.949231  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:28.949307  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:28.974943  291455 cri.go:89] found id: ""
	I1212 01:38:28.974968  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.974976  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:28.974982  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:28.975062  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:29.004380  291455 cri.go:89] found id: ""
	I1212 01:38:29.004475  291455 logs.go:282] 0 containers: []
	W1212 01:38:29.004501  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:29.004542  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:29.004572  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:29.021785  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:29.021856  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:29.143333  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:29.134378    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.134910    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.137306    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.137843    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.139511    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:29.134378    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.134910    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.137306    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.137843    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.139511    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:29.143354  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:29.143366  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:29.168668  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:29.168699  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:29.197133  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:29.197159  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 01:38:32.536552  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:35.039253  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:31.753888  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:31.765059  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:31.765150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:31.790319  291455 cri.go:89] found id: ""
	I1212 01:38:31.790342  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.790350  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:31.790357  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:31.790415  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:31.815400  291455 cri.go:89] found id: ""
	I1212 01:38:31.815424  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.815434  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:31.815441  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:31.815502  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:31.840194  291455 cri.go:89] found id: ""
	I1212 01:38:31.840217  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.840226  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:31.840231  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:31.840291  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:31.867911  291455 cri.go:89] found id: ""
	I1212 01:38:31.867935  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.867943  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:31.867949  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:31.868008  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:31.892198  291455 cri.go:89] found id: ""
	I1212 01:38:31.892222  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.892230  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:31.892238  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:31.892296  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:31.916890  291455 cri.go:89] found id: ""
	I1212 01:38:31.916914  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.916923  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:31.916929  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:31.916988  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:31.942060  291455 cri.go:89] found id: ""
	I1212 01:38:31.942085  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.942095  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:31.942102  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:31.942160  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:31.968817  291455 cri.go:89] found id: ""
	I1212 01:38:31.968839  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.968848  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:31.968857  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:31.968871  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:31.997201  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:31.997227  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:32.062907  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:32.062945  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:32.079848  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:32.079874  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:32.172399  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:32.162924    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.163521    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.165105    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.165573    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.167197    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:32.162924    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.163521    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.165105    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.165573    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.167197    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:32.172421  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:32.172433  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:34.699204  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:34.710589  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:34.710660  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:34.734740  291455 cri.go:89] found id: ""
	I1212 01:38:34.734767  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.734776  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:34.734782  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:34.734841  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:34.759636  291455 cri.go:89] found id: ""
	I1212 01:38:34.759659  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.759667  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:34.759679  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:34.759739  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:34.785220  291455 cri.go:89] found id: ""
	I1212 01:38:34.785255  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.785265  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:34.785271  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:34.785341  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:34.814480  291455 cri.go:89] found id: ""
	I1212 01:38:34.814502  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.814510  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:34.814516  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:34.814580  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:34.840740  291455 cri.go:89] found id: ""
	I1212 01:38:34.840774  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.840784  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:34.840790  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:34.840872  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:34.868875  291455 cri.go:89] found id: ""
	I1212 01:38:34.868898  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.868907  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:34.868913  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:34.868973  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:34.897841  291455 cri.go:89] found id: ""
	I1212 01:38:34.897864  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.897873  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:34.897879  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:34.897937  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:34.921846  291455 cri.go:89] found id: ""
	I1212 01:38:34.921869  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.921877  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:34.921886  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:34.921897  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:34.935038  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:34.935066  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:35.007684  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:34.997327    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:34.997746    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:34.999039    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:34.999714    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:35.001615    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:34.997327    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:34.997746    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:34.999039    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:34.999714    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:35.001615    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:35.007755  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:35.007775  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:35.034750  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:35.034794  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:35.089747  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:35.089777  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 01:38:37.536673  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:39.543660  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:37.657148  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:37.668842  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:37.668917  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:37.696665  291455 cri.go:89] found id: ""
	I1212 01:38:37.696699  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.696708  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:37.696720  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:37.696777  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:37.728956  291455 cri.go:89] found id: ""
	I1212 01:38:37.728979  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.728987  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:37.728993  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:37.729058  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:37.753296  291455 cri.go:89] found id: ""
	I1212 01:38:37.753324  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.753334  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:37.753340  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:37.753397  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:37.778445  291455 cri.go:89] found id: ""
	I1212 01:38:37.778471  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.778481  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:37.778490  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:37.778548  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:37.807550  291455 cri.go:89] found id: ""
	I1212 01:38:37.807572  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.807580  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:37.807587  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:37.807649  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:37.832292  291455 cri.go:89] found id: ""
	I1212 01:38:37.832315  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.832323  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:37.832329  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:37.832386  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:37.856566  291455 cri.go:89] found id: ""
	I1212 01:38:37.856588  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.856597  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:37.856602  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:37.856660  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:37.880677  291455 cri.go:89] found id: ""
	I1212 01:38:37.880741  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.880766  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:37.880789  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:37.880820  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:37.910870  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:37.910908  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:37.938485  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:37.938520  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:37.993961  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:37.993995  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:38.010371  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:38.010404  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:38.096529  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:38.085475    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.086344    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.088104    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.088451    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.092325    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:38.085475    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.086344    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.088104    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.088451    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.092325    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:40.598418  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:40.609775  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:40.609847  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:40.635651  291455 cri.go:89] found id: ""
	I1212 01:38:40.635677  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.635686  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:40.635693  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:40.635757  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:40.660863  291455 cri.go:89] found id: ""
	I1212 01:38:40.660889  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.660898  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:40.660905  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:40.660966  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:40.685941  291455 cri.go:89] found id: ""
	I1212 01:38:40.686012  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.686053  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:40.686078  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:40.686166  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:40.711525  291455 cri.go:89] found id: ""
	I1212 01:38:40.711554  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.711563  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:40.711569  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:40.711630  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:40.737721  291455 cri.go:89] found id: ""
	I1212 01:38:40.737795  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.737816  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:40.737836  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:40.737927  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:40.761337  291455 cri.go:89] found id: ""
	I1212 01:38:40.761402  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.761424  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:40.761442  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:40.761525  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:40.786163  291455 cri.go:89] found id: ""
	I1212 01:38:40.786239  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.786264  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:40.786285  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:40.786412  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:40.810546  291455 cri.go:89] found id: ""
	I1212 01:38:40.810610  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.810634  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:40.810655  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:40.810694  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:40.866283  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:40.866320  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:40.879799  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:40.879834  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:40.945902  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:40.937611    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.938411    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.939975    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.940544    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.942091    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:40.937611    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.938411    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.939975    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.940544    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.942091    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:40.945925  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:40.945938  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:40.971267  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:40.971302  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
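The block above is one full pass of minikube's control-plane discovery: for each expected component it asks the CRI for any container, running or exited, whose name matches, and every probe comes back with an empty ID list because nothing was ever started. The same probes can be reproduced by hand on the node; a minimal sketch (component names copied from the log, the loop itself is illustrative and assumes SSH access to this profile):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      # -a includes exited containers; --quiet prints only container IDs
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done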
	W1212 01:38:42.036561  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:44.536569  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:43.502022  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:43.513782  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:43.513855  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:43.538026  291455 cri.go:89] found id: ""
	I1212 01:38:43.538047  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.538055  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:43.538060  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:43.538117  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:43.562296  291455 cri.go:89] found id: ""
	I1212 01:38:43.562320  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.562329  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:43.562335  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:43.562399  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:43.585964  291455 cri.go:89] found id: ""
	I1212 01:38:43.585986  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.585995  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:43.586001  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:43.586056  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:43.609636  291455 cri.go:89] found id: ""
	I1212 01:38:43.609658  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.609666  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:43.609672  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:43.609729  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:43.634822  291455 cri.go:89] found id: ""
	I1212 01:38:43.634843  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.634852  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:43.634857  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:43.634916  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:43.659517  291455 cri.go:89] found id: ""
	I1212 01:38:43.659539  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.659553  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:43.659560  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:43.659619  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:43.684416  291455 cri.go:89] found id: ""
	I1212 01:38:43.684471  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.684486  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:43.684493  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:43.684557  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:43.708909  291455 cri.go:89] found id: ""
	I1212 01:38:43.708931  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.708939  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:43.708949  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:43.708961  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:43.764034  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:43.764069  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:43.778276  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:43.778304  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:43.849112  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:43.839330    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.839703    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.842808    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.843485    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.845319    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:43.839330    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.839703    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.842808    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.843485    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.845319    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:43.849132  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:43.849144  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:43.874790  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:43.874823  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:47.036537  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:49.536417  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
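The W-level lines tagged 287206 are interleaved from the parallel no-preload test, which is polling the node's Ready condition through the apiserver at 192.168.85.2:8443 and getting connection refused, i.e. nothing is listening on that endpoint either. An equivalent manual probe (a sketch only; the test itself goes through client-go, not curl):

    # connection refused fails before TLS, so -k only matters once a server answers
    curl -k --max-time 5 https://192.168.85.2:8443/api/v1/nodes/no-preload-361053 \
      || echo "apiserver unreachable"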
	I1212 01:38:46.404666  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:46.415686  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:46.415772  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:46.446409  291455 cri.go:89] found id: ""
	I1212 01:38:46.446436  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.446445  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:46.446452  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:46.446517  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:46.481137  291455 cri.go:89] found id: ""
	I1212 01:38:46.481160  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.481169  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:46.481175  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:46.481258  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:46.506866  291455 cri.go:89] found id: ""
	I1212 01:38:46.506892  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.506902  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:46.506908  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:46.506964  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:46.535109  291455 cri.go:89] found id: ""
	I1212 01:38:46.535185  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.535208  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:46.535228  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:46.535312  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:46.559379  291455 cri.go:89] found id: ""
	I1212 01:38:46.559402  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.559410  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:46.559417  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:46.559478  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:46.583642  291455 cri.go:89] found id: ""
	I1212 01:38:46.583717  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.583738  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:46.583758  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:46.583842  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:46.608474  291455 cri.go:89] found id: ""
	I1212 01:38:46.608541  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.608563  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:46.608578  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:46.608652  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:46.632905  291455 cri.go:89] found id: ""
	I1212 01:38:46.632982  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.632997  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:46.633007  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:46.633018  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:46.689011  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:46.689048  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:46.702565  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:46.702592  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:46.772610  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:46.763145    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.764149    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.764820    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.766385    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.766678    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:46.763145    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.764149    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.764820    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.766385    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.766678    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:46.772629  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:46.772643  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:46.797690  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:46.797725  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:49.328051  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:49.341287  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:49.341360  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:49.378113  291455 cri.go:89] found id: ""
	I1212 01:38:49.378135  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.378143  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:49.378149  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:49.378210  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:49.404269  291455 cri.go:89] found id: ""
	I1212 01:38:49.404291  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.404300  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:49.404306  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:49.404364  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:49.428783  291455 cri.go:89] found id: ""
	I1212 01:38:49.428809  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.428819  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:49.428825  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:49.428884  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:49.453856  291455 cri.go:89] found id: ""
	I1212 01:38:49.453889  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.453898  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:49.453905  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:49.453965  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:49.480403  291455 cri.go:89] found id: ""
	I1212 01:38:49.480428  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.480439  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:49.480445  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:49.480502  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:49.505527  291455 cri.go:89] found id: ""
	I1212 01:38:49.505594  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.505617  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:49.505644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:49.505740  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:49.529450  291455 cri.go:89] found id: ""
	I1212 01:38:49.529474  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.529483  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:49.529489  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:49.529546  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:49.554349  291455 cri.go:89] found id: ""
	I1212 01:38:49.554412  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.554435  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:49.554465  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:49.554493  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:49.611773  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:49.611805  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:49.625145  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:49.625169  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:49.689186  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:49.680639    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.681463    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.683157    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.683640    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.685287    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:49.680639    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.681463    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.683157    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.683640    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.685287    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:49.689208  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:49.689220  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:49.715241  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:49.715275  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
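The "container status" gather step is a shell fallback chain: resolve crictl via `which`, fall back to the bare name if it is not on PATH, and if the whole CRI listing fails for any reason, list docker containers instead. Unrolled for readability (behavior-equivalent to the one-liner in the log):

    CRICTL=$(which crictl || echo crictl)       # resolved path, or the bare name if not found
    sudo "$CRICTL" ps -a || sudo docker ps -a   # on any crictl failure, fall back to docker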
	W1212 01:38:51.536523  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:53.536619  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:52.245578  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:52.255964  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:52.256032  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:52.288234  291455 cri.go:89] found id: ""
	I1212 01:38:52.288273  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.288281  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:52.288287  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:52.288362  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:52.361726  291455 cri.go:89] found id: ""
	I1212 01:38:52.361756  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.361765  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:52.361772  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:52.361848  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:52.390222  291455 cri.go:89] found id: ""
	I1212 01:38:52.390248  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.390257  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:52.390262  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:52.390320  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:52.415677  291455 cri.go:89] found id: ""
	I1212 01:38:52.415712  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.415721  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:52.415728  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:52.415796  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:52.440412  291455 cri.go:89] found id: ""
	I1212 01:38:52.440435  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.440444  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:52.440450  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:52.440508  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:52.464172  291455 cri.go:89] found id: ""
	I1212 01:38:52.464203  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.464212  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:52.464219  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:52.464278  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:52.496050  291455 cri.go:89] found id: ""
	I1212 01:38:52.496075  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.496083  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:52.496089  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:52.496147  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:52.525249  291455 cri.go:89] found id: ""
	I1212 01:38:52.525271  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.525279  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:52.525288  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:52.525299  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:52.580198  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:52.580233  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:52.593582  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:52.593648  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:52.659167  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:52.650803    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.651520    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.653182    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.653702    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.655438    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:52.650803    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.651520    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.653182    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.653702    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.655438    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:52.659187  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:52.659199  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:52.685268  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:52.685300  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:55.219025  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:55.229148  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:55.229222  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:55.252977  291455 cri.go:89] found id: ""
	I1212 01:38:55.253051  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.253066  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:55.253077  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:55.253140  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:55.276881  291455 cri.go:89] found id: ""
	I1212 01:38:55.276945  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.276959  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:55.276966  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:55.277024  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:55.316321  291455 cri.go:89] found id: ""
	I1212 01:38:55.316355  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.316364  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:55.316370  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:55.316447  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:55.355675  291455 cri.go:89] found id: ""
	I1212 01:38:55.355703  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.355711  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:55.355717  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:55.355791  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:55.394580  291455 cri.go:89] found id: ""
	I1212 01:38:55.394607  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.394615  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:55.394621  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:55.394693  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:55.423340  291455 cri.go:89] found id: ""
	I1212 01:38:55.423363  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.423371  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:55.423378  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:55.423436  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:55.447512  291455 cri.go:89] found id: ""
	I1212 01:38:55.447536  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.447544  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:55.447550  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:55.447610  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:55.470830  291455 cri.go:89] found id: ""
	I1212 01:38:55.470853  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.470867  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:55.470876  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:55.470886  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:55.528525  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:55.528561  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:55.541815  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:55.541843  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:55.605253  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:55.596889    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.597592    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.599233    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.599799    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.601358    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:55.596889    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.597592    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.599233    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.599799    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.601358    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:55.605280  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:55.605292  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:55.631237  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:55.631267  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:55.536688  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:58.036700  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:58.158753  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:58.169462  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:58.169546  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:58.194075  291455 cri.go:89] found id: ""
	I1212 01:38:58.194096  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.194105  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:58.194111  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:58.194171  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:58.218468  291455 cri.go:89] found id: ""
	I1212 01:38:58.218546  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.218569  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:58.218590  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:58.218675  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:58.242950  291455 cri.go:89] found id: ""
	I1212 01:38:58.242973  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.242981  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:58.242987  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:58.243142  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:58.269403  291455 cri.go:89] found id: ""
	I1212 01:38:58.269423  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.269432  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:58.269439  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:58.269502  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:58.317022  291455 cri.go:89] found id: ""
	I1212 01:38:58.317044  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.317054  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:58.317059  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:58.317117  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:58.373414  291455 cri.go:89] found id: ""
	I1212 01:38:58.373486  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.373511  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:58.373531  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:58.373619  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:58.404516  291455 cri.go:89] found id: ""
	I1212 01:38:58.404583  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.404597  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:58.404604  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:58.404663  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:58.433096  291455 cri.go:89] found id: ""
	I1212 01:38:58.433120  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.433131  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:58.433141  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:58.433170  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:58.495200  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:58.486845    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.487734    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.489310    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.489623    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.491296    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:58.486845    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.487734    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.489310    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.489623    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.491296    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:58.495223  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:58.495237  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:58.520595  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:58.520626  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:58.547636  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:58.547664  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:58.603945  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:58.603979  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
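Alongside the CRI probes, each pass tails the last 400 journal lines for the kubelet and containerd units plus warning-and-above kernel messages; with no containers on the node, these are the only logs left to collect. The commands as run over SSH in this log:

    sudo journalctl -u kubelet -n 400        # kubelet service tail
    sudo journalctl -u containerd -n 400     # containerd service tail
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400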
	I1212 01:39:01.119071  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:01.130124  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:01.130196  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:01.155700  291455 cri.go:89] found id: ""
	I1212 01:39:01.155725  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.155733  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:01.155740  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:01.155799  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:01.183985  291455 cri.go:89] found id: ""
	I1212 01:39:01.184012  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.184021  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:01.184028  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:01.184095  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:01.211713  291455 cri.go:89] found id: ""
	I1212 01:39:01.211740  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.211749  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:01.211756  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:01.211817  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:01.238159  291455 cri.go:89] found id: ""
	I1212 01:39:01.238185  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.238195  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:01.238201  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:01.238265  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:01.264520  291455 cri.go:89] found id: ""
	I1212 01:39:01.264544  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.264553  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:01.264560  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:01.264618  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:01.320162  291455 cri.go:89] found id: ""
	I1212 01:39:01.320191  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.320200  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:01.320207  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:01.320276  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1212 01:39:00.536335  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:02.536671  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:05.036449  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:39:01.367993  291455 cri.go:89] found id: ""
	I1212 01:39:01.368020  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.368029  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:01.368037  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:01.368107  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:01.395205  291455 cri.go:89] found id: ""
	I1212 01:39:01.395230  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.395239  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:01.395248  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:01.395260  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:01.450970  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:01.451049  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:01.464511  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:01.464540  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:01.529452  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:01.521771    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.522386    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.523907    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.524217    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.525703    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:01.521771    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.522386    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.523907    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.524217    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.525703    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:01.529472  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:01.529484  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:01.553702  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:01.553734  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:04.082286  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:04.093237  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:04.093313  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:04.118261  291455 cri.go:89] found id: ""
	I1212 01:39:04.118283  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.118292  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:04.118298  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:04.118360  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:04.147714  291455 cri.go:89] found id: ""
	I1212 01:39:04.147736  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.147745  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:04.147751  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:04.147815  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:04.172999  291455 cri.go:89] found id: ""
	I1212 01:39:04.173023  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.173032  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:04.173039  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:04.173101  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:04.197081  291455 cri.go:89] found id: ""
	I1212 01:39:04.197103  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.197111  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:04.197119  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:04.197176  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:04.220639  291455 cri.go:89] found id: ""
	I1212 01:39:04.220665  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.220674  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:04.220681  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:04.220746  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:04.248901  291455 cri.go:89] found id: ""
	I1212 01:39:04.248926  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.248935  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:04.248944  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:04.249011  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:04.274064  291455 cri.go:89] found id: ""
	I1212 01:39:04.274085  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.274093  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:04.274099  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:04.274161  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:04.332510  291455 cri.go:89] found id: ""
	I1212 01:39:04.332535  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.332545  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:04.332555  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:04.332572  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:04.368151  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:04.368189  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:04.403091  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:04.403118  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:04.459000  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:04.459031  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:04.472281  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:04.472306  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:04.534979  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:04.526363    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.527054    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.528724    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.529233    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.530692    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:04.526363    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.527054    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.528724    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.529233    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.530692    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 01:39:07.036549  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:09.036731  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:39:07.035447  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:07.046244  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:07.046313  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:07.072737  291455 cri.go:89] found id: ""
	I1212 01:39:07.072761  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.072770  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:07.072776  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:07.072835  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:07.097400  291455 cri.go:89] found id: ""
	I1212 01:39:07.097423  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.097431  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:07.097438  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:07.097496  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:07.121464  291455 cri.go:89] found id: ""
	I1212 01:39:07.121486  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.121495  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:07.121501  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:07.121584  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:07.145780  291455 cri.go:89] found id: ""
	I1212 01:39:07.145800  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.145808  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:07.145814  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:07.145870  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:07.169997  291455 cri.go:89] found id: ""
	I1212 01:39:07.170018  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.170027  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:07.170033  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:07.170091  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:07.195061  291455 cri.go:89] found id: ""
	I1212 01:39:07.195088  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.195096  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:07.195103  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:07.195161  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:07.220294  291455 cri.go:89] found id: ""
	I1212 01:39:07.220317  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.220325  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:07.220331  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:07.220389  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:07.245551  291455 cri.go:89] found id: ""
	I1212 01:39:07.245576  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.245586  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:07.245595  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:07.245607  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:07.277493  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:07.277521  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:07.344946  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:07.347238  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:07.376690  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:07.376714  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:07.447695  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:07.438862    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.439591    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.441334    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.441943    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.443673    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:07.438862    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.439591    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.441334    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.441943    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.443673    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:07.447717  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:07.447730  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:09.974214  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:09.987839  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:09.987921  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:10.025371  291455 cri.go:89] found id: ""
	I1212 01:39:10.025397  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.025407  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:10.025413  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:10.025477  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:10.051333  291455 cri.go:89] found id: ""
	I1212 01:39:10.051357  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.051366  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:10.051371  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:10.051436  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:10.075263  291455 cri.go:89] found id: ""
	I1212 01:39:10.075289  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.075298  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:10.075305  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:10.075364  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:10.103331  291455 cri.go:89] found id: ""
	I1212 01:39:10.103355  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.103364  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:10.103370  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:10.103431  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:10.128706  291455 cri.go:89] found id: ""
	I1212 01:39:10.128730  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.128739  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:10.128746  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:10.128802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:10.154605  291455 cri.go:89] found id: ""
	I1212 01:39:10.154627  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.154637  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:10.154644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:10.154703  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:10.179767  291455 cri.go:89] found id: ""
	I1212 01:39:10.179791  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.179800  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:10.179806  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:10.179864  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:10.208346  291455 cri.go:89] found id: ""
	I1212 01:39:10.208369  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.208376  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:10.208386  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:10.208397  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:10.263848  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:10.263883  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:10.279969  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:10.279994  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:10.405176  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:10.396616    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.397197    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.398853    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.399595    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.401217    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:10.396616    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.397197    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.398853    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.399595    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.401217    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:10.405198  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:10.405210  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:10.431360  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:10.431398  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:39:11.536529  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:13.536580  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:39:12.959344  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:12.971541  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:12.971628  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:13.006786  291455 cri.go:89] found id: ""
	I1212 01:39:13.006815  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.006824  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:13.006830  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:13.006903  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:13.032106  291455 cri.go:89] found id: ""
	I1212 01:39:13.032127  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.032135  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:13.032141  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:13.032200  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:13.057432  291455 cri.go:89] found id: ""
	I1212 01:39:13.057454  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.057463  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:13.057469  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:13.057529  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:13.082502  291455 cri.go:89] found id: ""
	I1212 01:39:13.082524  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.082532  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:13.082538  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:13.082595  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:13.108199  291455 cri.go:89] found id: ""
	I1212 01:39:13.108272  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.108295  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:13.108323  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:13.108433  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:13.134284  291455 cri.go:89] found id: ""
	I1212 01:39:13.134356  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.134379  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:13.134398  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:13.134485  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:13.159517  291455 cri.go:89] found id: ""
	I1212 01:39:13.159541  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.159550  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:13.159556  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:13.159614  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:13.183175  291455 cri.go:89] found id: ""
	I1212 01:39:13.183199  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.183207  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:13.183216  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:13.183232  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:13.241174  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:13.241210  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:13.254849  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:13.254880  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:13.381552  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:13.373347    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.373888    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.375400    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.375820    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.377000    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:13.373347    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.373888    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.375400    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.375820    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.377000    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:13.381573  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:13.381586  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:13.406354  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:13.406385  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:15.933099  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:15.943596  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:15.943674  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:15.966960  291455 cri.go:89] found id: ""
	I1212 01:39:15.967014  291455 logs.go:282] 0 containers: []
	W1212 01:39:15.967023  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:15.967030  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:15.967090  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:15.996145  291455 cri.go:89] found id: ""
	I1212 01:39:15.996167  291455 logs.go:282] 0 containers: []
	W1212 01:39:15.996175  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:15.996182  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:15.996239  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:16.025152  291455 cri.go:89] found id: ""
	I1212 01:39:16.025175  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.025183  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:16.025191  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:16.025248  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:16.050231  291455 cri.go:89] found id: ""
	I1212 01:39:16.050264  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.050273  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:16.050279  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:16.050345  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:16.076929  291455 cri.go:89] found id: ""
	I1212 01:39:16.076958  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.076967  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:16.076975  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:16.077054  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:16.102241  291455 cri.go:89] found id: ""
	I1212 01:39:16.102273  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.102282  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:16.102304  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:16.102383  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:16.126239  291455 cri.go:89] found id: ""
	I1212 01:39:16.126302  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.126324  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:16.126344  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:16.126417  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:16.151645  291455 cri.go:89] found id: ""
	I1212 01:39:16.151674  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.151683  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:16.151692  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:16.151702  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:16.176852  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:16.176882  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:16.206720  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:16.206746  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:16.262653  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:16.262686  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:16.275603  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:16.275634  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:16.035987  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:18.036847  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:39:18.536213  287206 node_ready.go:38] duration metric: took 6m0.000908955s for node "no-preload-361053" to be "Ready" ...
	I1212 01:39:18.539274  287206 out.go:203] 
	W1212 01:39:18.542145  287206 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 01:39:18.542166  287206 out.go:285] * 
	W1212 01:39:18.544311  287206 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:39:18.547291  287206 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451347340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451362208Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451390508Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451404473Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451413802Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451425445Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451435169Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451453918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451470123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451502804Z" level=info msg="Connect containerd service"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451753785Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.452300474Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.470473080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.470539313Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.470573570Z" level=info msg="Start subscribing containerd event"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.470624376Z" level=info msg="Start recovering state"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499034921Z" level=info msg="Start event monitor"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499222886Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499311773Z" level=info msg="Start streaming server"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499396130Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499649310Z" level=info msg="runtime interface starting up..."
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499722058Z" level=info msg="starting plugins..."
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499802846Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 01:33:16 no-preload-361053 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.501821533Z" level=info msg="containerd successfully booted in 0.072171s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:20.268696    3961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:20.269371    3961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:20.271138    3961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:20.271605    3961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:20.273326    3961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	[Dec12 00:40] hrtimer: interrupt took 11339963 ns
	
	
	==> kernel <==
	 01:39:20 up  2:21,  0 user,  load average: 0.59, 0.74, 1.38
	Linux no-preload-361053 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 01:39:17 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:39:17 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 12 01:39:17 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:39:17 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:39:17 no-preload-361053 kubelet[3838]: E1212 01:39:17.842190    3838 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:39:17 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:39:17 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:39:18 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 12 01:39:18 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:39:18 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:39:18 no-preload-361053 kubelet[3844]: E1212 01:39:18.670307    3844 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:39:18 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:39:18 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:39:19 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 12 01:39:19 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:39:19 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:39:19 no-preload-361053 kubelet[3865]: E1212 01:39:19.411860    3865 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:39:19 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:39:19 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:39:20 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 12 01:39:20 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:39:20 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:39:20 no-preload-361053 kubelet[3923]: E1212 01:39:20.141463    3923 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:39:20 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:39:20 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-361053 -n no-preload-361053
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-361053 -n no-preload-361053: exit status 2 (328.562182ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-361053" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (371.01s)
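
The failure pattern in the log above is consistent from end to end: kubelet crash-loops ("restart counter is at 481" through 484 in the final four seconds alone) on "kubelet is configured to not run on a host using cgroup v1", so no control-plane container is ever created and every probe of localhost:8443 and 192.168.85.2:8443 is refused until the 6m0s WaitNodeCondition deadline expires. A minimal shell sketch for confirming the cgroup mode, assuming the no-preload-361053 profile is still up (stat prints "cgroup2fs" under cgroup v2 and "tmpfs" under v1):

    # On the CI host: the kernel section above shows 5.15.0-1084-aws on Ubuntu 20.04,
    # which defaults to cgroup v1.
    stat -fc %T /sys/fs/cgroup
    docker info --format '{{.CgroupVersion}}'

    # Inside the node container, which inherits the host's cgroup mode:
    minikube ssh -p no-preload-361053 -- stat -fc %T /sys/fs/cgroup

The error wording ("is configured to not run") points at the kubelet configuration rather than the binary itself, which would explain why only the v1.35.0-beta.0 jobs in this report trip over it.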
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (97.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-256959 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1212 01:33:41.615031    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:33:52.051916    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:34:00.122455    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-256959 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m35.564035224s)
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-256959 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
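
The addon failure is a downstream symptom of the same condition: kubectl apply first downloads the server's OpenAPI schema to validate the four manifests, and localhost:8443 on newest-cni-256959 refuses connections because the apiserver never came up, presumably for the same cgroup v1 reason seen in the no-preload logs above. A sketch of re-running the apply by hand with the --validate=false escape hatch the stderr suggests; note this only skips the OpenAPI download, so it would still fail until the apiserver answers on 8443, which is why the exit status 10 here reflects the broken control plane rather than a metrics-server bug:

    # Same command the addon callback runs (paths taken from the stderr above),
    # with manifest validation disabled:
    minikube ssh -p newest-cni-256959 -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
        -f /etc/kubernetes/addons/metrics-apiservice.yaml \
        -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
        -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
        -f /etc/kubernetes/addons/metrics-server-service.yaml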
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-256959
helpers_test.go:244: (dbg) docker inspect newest-cni-256959:
-- stdout --
	[
	    {
	        "Id": "361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b",
	        "Created": "2025-12-12T01:25:15.433462291Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 277175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T01:25:15.494100167Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/hostname",
	        "HostsPath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/hosts",
	        "LogPath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b-json.log",
	        "Name": "/newest-cni-256959",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-256959:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-256959",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b",
	                "LowerDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017/merged",
	                "UpperDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017/diff",
	                "WorkDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-256959",
	                "Source": "/var/lib/docker/volumes/newest-cni-256959/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-256959",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-256959",
	                "name.minikube.sigs.k8s.io": "newest-cni-256959",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0b5fdda8c44db2b08c6f089f74d1eb8e7f3198550ce1c1afce9d13d69b6616c0",
	            "SandboxKey": "/var/run/docker/netns/0b5fdda8c44d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-256959": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:7e:47:09:12:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "08d9e23f02a4d7730d420d79f658bc1854aa3d62ee2a54a8cd34a455b2ba0431",
	                    "EndpointID": "cbdc9207c393fe6537a0e89077b0b631c11292137bcf558f1de9aba21fb8c57a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-256959",
	                        "361f9c16c44a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
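Most of the inspect dump above is boilerplate; for this failure the relevant fields are `.State` (the container is Running) and the host port published for the apiserver's `8443/tcp` under `NetworkSettings.Ports`. A short sketch for pulling just those fields with Go templates, the same mechanism the harness itself uses in the logs below (container name taken from this run):

    # Container state only.
    docker inspect -f '{{.State.Status}}' newest-cni-256959
    # Host port mapped to the apiserver's 8443/tcp (33096 in this run).
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-256959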
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-256959 -n newest-cni-256959
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-256959 -n newest-cni-256959: exit status 6 (352.947939ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 01:35:08.585859  290934 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-256959" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
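The status command reports the host Running but exits non-zero because the profile's entry is missing from the kubeconfig, matching the "does not appear in .../kubeconfig" error above and the stale-context warning in stdout. The fix the status output itself suggests, sketched with this run's binary and profile name:

    # Rewrite the kubeconfig entry for the profile to point at the current endpoint.
    out/minikube-linux-arm64 update-context -p newest-cni-256959
    # Confirm the context resolves again before re-running kubectl-based checks.
    kubectl config get-contexts newest-cni-256959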
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-256959 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-147581                                                                                                                                                                                                                                  │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p old-k8s-version-147581                                                                                                                                                                                                                                  │ old-k8s-version-147581       │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:23 UTC │
	│ image   │ default-k8s-diff-port-971096 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ pause   │ -p default-k8s-diff-port-971096 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ unpause │ -p default-k8s-diff-port-971096 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p disable-driver-mounts-539158                                                                                                                                                                                                                            │ disable-driver-mounts-539158 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-648696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ stop    │ -p embed-certs-648696 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ addons  │ enable dashboard -p embed-certs-648696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:24 UTC │
	│ image   │ embed-certs-648696 image list --format=json                                                                                                                                                                                                                │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ pause   │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ unpause │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ start   │ -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-361053 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:31 UTC │                     │
	│ stop    │ -p no-preload-361053 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │ 12 Dec 25 01:33 UTC │
	│ addons  │ enable dashboard -p no-preload-361053 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │ 12 Dec 25 01:33 UTC │
	│ start   │ -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-256959 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 01:33:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:33:10.429459  287206 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:33:10.429581  287206 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:33:10.429595  287206 out.go:374] Setting ErrFile to fd 2...
	I1212 01:33:10.429600  287206 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:33:10.429856  287206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:33:10.430230  287206 out.go:368] Setting JSON to false
	I1212 01:33:10.431163  287206 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8137,"bootTime":1765495054,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 01:33:10.431230  287206 start.go:143] virtualization:  
	I1212 01:33:10.434281  287206 out.go:179] * [no-preload-361053] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:33:10.438251  287206 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:33:10.438392  287206 notify.go:221] Checking for updates...
	I1212 01:33:10.444185  287206 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:33:10.447214  287206 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:33:10.450100  287206 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 01:33:10.452984  287206 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:33:10.455808  287206 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:33:10.459169  287206 config.go:182] Loaded profile config "no-preload-361053": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:33:10.459786  287206 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:33:10.491859  287206 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:33:10.491978  287206 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:33:10.546591  287206 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:33:10.536325619 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:33:10.546711  287206 docker.go:319] overlay module found
	I1212 01:33:10.549899  287206 out.go:179] * Using the docker driver based on existing profile
	I1212 01:33:10.552847  287206 start.go:309] selected driver: docker
	I1212 01:33:10.552889  287206 start.go:927] validating driver "docker" against &{Name:no-preload-361053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:33:10.552995  287206 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:33:10.553716  287206 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:33:10.609060  287206 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:33:10.599832814 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:33:10.609400  287206 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:33:10.609433  287206 cni.go:84] Creating CNI manager for ""
	I1212 01:33:10.609483  287206 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:33:10.609530  287206 start.go:353] cluster config:
	{Name:no-preload-361053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:33:10.614478  287206 out.go:179] * Starting "no-preload-361053" primary control-plane node in "no-preload-361053" cluster
	I1212 01:33:10.617235  287206 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 01:33:10.620106  287206 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:33:10.622869  287206 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:33:10.622947  287206 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:33:10.623042  287206 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/config.json ...
	I1212 01:33:10.623355  287206 cache.go:107] acquiring lock: {Name:mk86e2a34ccf063d967d1b885c7693629a6b1892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623437  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1212 01:33:10.623451  287206 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 115.784µs
	I1212 01:33:10.623465  287206 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1212 01:33:10.623481  287206 cache.go:107] acquiring lock: {Name:mk5046428d0406b9fe0bac2e28c1f5cc3958499f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623518  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1212 01:33:10.623527  287206 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 47.795µs
	I1212 01:33:10.623533  287206 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1212 01:33:10.623546  287206 cache.go:107] acquiring lock: {Name:mkc4887793edcc3c6296024b677e69f6ec1f79f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623586  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1212 01:33:10.623594  287206 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 49.322µs
	I1212 01:33:10.623600  287206 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1212 01:33:10.623610  287206 cache.go:107] acquiring lock: {Name:mkeb49560acf33aa79e308e0b71177927ef617d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623642  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1212 01:33:10.623650  287206 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 41.412µs
	I1212 01:33:10.623656  287206 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1212 01:33:10.623665  287206 cache.go:107] acquiring lock: {Name:mk2f0a11f2d527d62eb30e98e76f3a359773886b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623691  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1212 01:33:10.623696  287206 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.763µs
	I1212 01:33:10.623707  287206 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1212 01:33:10.623716  287206 cache.go:107] acquiring lock: {Name:mkf75c8f281a4d7578645f330ed9cc6bf48ab550 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623747  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1212 01:33:10.623755  287206 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 40.37µs
	I1212 01:33:10.623761  287206 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1212 01:33:10.623772  287206 cache.go:107] acquiring lock: {Name:mk1d6384b2d8bd32efb0f4661eaa55ecd74d4b80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623803  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1212 01:33:10.623812  287206 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 42.807µs
	I1212 01:33:10.623817  287206 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1212 01:33:10.623321  287206 cache.go:107] acquiring lock: {Name:mk71cce41032f52f0748ef343d21f16410e3a1fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.623892  287206 cache.go:115] /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1212 01:33:10.623901  287206 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 595.264µs
	I1212 01:33:10.623907  287206 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1212 01:33:10.623913  287206 cache.go:87] Successfully saved all images to host disk.
	I1212 01:33:10.643214  287206 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:33:10.643238  287206 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:33:10.643258  287206 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:33:10.643289  287206 start.go:360] acquireMachinesLock for no-preload-361053: {Name:mk154c67822339b116aad3ea851214e3043755e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:33:10.643359  287206 start.go:364] duration metric: took 48.558µs to acquireMachinesLock for "no-preload-361053"
	I1212 01:33:10.643382  287206 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:33:10.643393  287206 fix.go:54] fixHost starting: 
	I1212 01:33:10.643654  287206 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:33:10.661405  287206 fix.go:112] recreateIfNeeded on no-preload-361053: state=Stopped err=<nil>
	W1212 01:33:10.661436  287206 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:33:10.664651  287206 out.go:252] * Restarting existing docker container for "no-preload-361053" ...
	I1212 01:33:10.664755  287206 cli_runner.go:164] Run: docker start no-preload-361053
	I1212 01:33:10.948880  287206 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:33:10.974106  287206 kic.go:430] container "no-preload-361053" state is running.
	I1212 01:33:10.974585  287206 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-361053
	I1212 01:33:10.995294  287206 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/config.json ...
	I1212 01:33:10.995534  287206 machine.go:94] provisionDockerMachine start ...
	I1212 01:33:10.995608  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:11.019191  287206 main.go:143] libmachine: Using SSH client type: native
	I1212 01:33:11.019517  287206 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 01:33:11.019526  287206 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:33:11.020659  287206 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 01:33:14.170473  287206 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-361053
	
	I1212 01:33:14.170498  287206 ubuntu.go:182] provisioning hostname "no-preload-361053"
	I1212 01:33:14.170559  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:14.188567  287206 main.go:143] libmachine: Using SSH client type: native
	I1212 01:33:14.188886  287206 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 01:33:14.188903  287206 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-361053 && echo "no-preload-361053" | sudo tee /etc/hostname
	I1212 01:33:14.348144  287206 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-361053
	
	I1212 01:33:14.348281  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:14.367391  287206 main.go:143] libmachine: Using SSH client type: native
	I1212 01:33:14.367704  287206 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1212 01:33:14.367719  287206 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-361053' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-361053/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-361053' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:33:14.519558  287206 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:33:14.519628  287206 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 01:33:14.519686  287206 ubuntu.go:190] setting up certificates
	I1212 01:33:14.519722  287206 provision.go:84] configureAuth start
	I1212 01:33:14.519802  287206 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-361053
	I1212 01:33:14.543680  287206 provision.go:143] copyHostCerts
	I1212 01:33:14.543759  287206 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 01:33:14.543768  287206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 01:33:14.543857  287206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 01:33:14.543983  287206 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 01:33:14.543989  287206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 01:33:14.544018  287206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 01:33:14.544096  287206 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 01:33:14.544103  287206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 01:33:14.544130  287206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 01:33:14.544187  287206 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.no-preload-361053 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-361053]
	I1212 01:33:14.844647  287206 provision.go:177] copyRemoteCerts
	I1212 01:33:14.844713  287206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:33:14.844788  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:14.862571  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:14.966655  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:33:14.983728  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:33:15.000842  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 01:33:15.029620  287206 provision.go:87] duration metric: took 509.857308ms to configureAuth
	I1212 01:33:15.029672  287206 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:33:15.029880  287206 config.go:182] Loaded profile config "no-preload-361053": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:33:15.029895  287206 machine.go:97] duration metric: took 4.034345397s to provisionDockerMachine
	I1212 01:33:15.029904  287206 start.go:293] postStartSetup for "no-preload-361053" (driver="docker")
	I1212 01:33:15.029919  287206 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:33:15.029980  287206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:33:15.030125  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:15.050338  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:15.155159  287206 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:33:15.158821  287206 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:33:15.158851  287206 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:33:15.158882  287206 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 01:33:15.159031  287206 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 01:33:15.159139  287206 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 01:33:15.159244  287206 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:33:15.167777  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:33:15.188093  287206 start.go:296] duration metric: took 158.172096ms for postStartSetup
	I1212 01:33:15.188178  287206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:33:15.188223  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:15.205702  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:15.308942  287206 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:33:15.313983  287206 fix.go:56] duration metric: took 4.670584581s for fixHost
	I1212 01:33:15.314011  287206 start.go:83] releasing machines lock for "no-preload-361053", held for 4.670641336s
	I1212 01:33:15.314079  287206 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-361053
	I1212 01:33:15.332761  287206 ssh_runner.go:195] Run: cat /version.json
	I1212 01:33:15.332818  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:15.333070  287206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:33:15.333129  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:15.357886  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:15.373191  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:15.462718  287206 ssh_runner.go:195] Run: systemctl --version
	I1212 01:33:15.559571  287206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:33:15.564162  287206 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:33:15.564271  287206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:33:15.572295  287206 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 01:33:15.572323  287206 start.go:496] detecting cgroup driver to use...
	I1212 01:33:15.572376  287206 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:33:15.572457  287206 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 01:33:15.590265  287206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 01:33:15.603931  287206 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:33:15.604040  287206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:33:15.619709  287206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:33:15.633120  287206 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:33:15.745120  287206 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:33:15.856267  287206 docker.go:234] disabling docker service ...
	I1212 01:33:15.856362  287206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:33:15.872142  287206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:33:15.885538  287206 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:33:16.007318  287206 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:33:16.145250  287206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:33:16.158078  287206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:33:16.173659  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 01:33:16.183387  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 01:33:16.192439  287206 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 01:33:16.192510  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 01:33:16.201771  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:33:16.210383  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 01:33:16.219183  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:33:16.227825  287206 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:33:16.236204  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 01:33:16.245075  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 01:33:16.253975  287206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
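The run of sed edits above rewrites /etc/containerd/config.toml in place: pinning the pause image to registry.k8s.io/pause:3.10.1, forcing the runc v2 shim, pointing conf_dir at /etc/cni/net.d, re-enabling unprivileged ports, and, per the "cgroupfs" detection, setting SystemdCgroup = false. A quick way to eyeball the result (a sketch; the key names are the ones targeted by the sed expressions above):

    # Verify the values the sed edits are expected to leave behind.
    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml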
	I1212 01:33:16.263051  287206 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:33:16.271105  287206 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
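Both kernel settings touched here are pod-networking prerequisites: bridge-nf-call-iptables makes bridged pod traffic visible to iptables, and ip_forward lets the node route between the pod and service networks. Reading them back by hand:

    # Read back the two settings minikube checks/sets above.
    sysctl net.bridge.bridge-nf-call-iptables
    cat /proc/sys/net/ipv4/ip_forward   # expected to print 1 after the echo above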
	I1212 01:33:16.278773  287206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:33:16.395685  287206 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 01:33:16.502787  287206 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 01:33:16.502918  287206 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 01:33:16.506854  287206 start.go:564] Will wait 60s for crictl version
	I1212 01:33:16.506959  287206 ssh_runner.go:195] Run: which crictl
	I1212 01:33:16.510418  287206 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:33:16.536180  287206 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
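After restarting containerd, minikube polls for the CRI socket (for up to 60s, per the two "Will wait 60s" lines above) and then queries the runtime through crictl, producing the RuntimeName/RuntimeVersion block just shown. A minimal sketch of that wait; the socket path and 60s budget come from the log, while the polling loop itself is illustrative:

    # Poll up to 60s for the containerd CRI socket, then print the version.
    for i in $(seq 1 60); do
      test -S /run/containerd/containerd.sock && break
      sleep 1
    done
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version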
	I1212 01:33:16.536315  287206 ssh_runner.go:195] Run: containerd --version
	I1212 01:33:16.557674  287206 ssh_runner.go:195] Run: containerd --version
	I1212 01:33:16.585134  287206 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 01:33:16.587946  287206 cli_runner.go:164] Run: docker network inspect no-preload-361053 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:33:16.609867  287206 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1212 01:33:16.613918  287206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:33:16.623744  287206 kubeadm.go:884] updating cluster {Name:no-preload-361053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:33:16.623857  287206 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:33:16.623916  287206 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:33:16.650653  287206 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:33:16.650674  287206 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:33:16.650681  287206 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1212 01:33:16.650792  287206 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-361053 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
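The empty ExecStart= line followed by the full ExecStart=... in the unit above is the standard systemd drop-in idiom for replacing, rather than appending to, the ExecStart of the base kubelet unit; the log below shows this rendered into /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. To inspect the merged result once the drop-in is installed:

    # Show the base kubelet unit together with its drop-in overrides.
    systemctl cat kubelet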
	I1212 01:33:16.650867  287206 ssh_runner.go:195] Run: sudo crictl info
	I1212 01:33:16.676358  287206 cni.go:84] Creating CNI manager for ""
	I1212 01:33:16.676391  287206 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:33:16.676434  287206 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 01:33:16.676473  287206 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-361053 NodeName:no-preload-361053 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:33:16.676614  287206 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-361053"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:33:16.676692  287206 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 01:33:16.684549  287206 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:33:16.684631  287206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:33:16.692278  287206 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:33:16.704678  287206 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 01:33:16.717453  287206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
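The file just written to /var/tmp/minikube/kubeadm.yaml.new stacks four kubeadm API documents separated by --- (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), as shown in full above. Recent kubeadm releases can sanity-check such a file offline; availability of the subcommand depends on the kubeadm version in use, so treat this as a hedged sketch:

    # Offline sanity check of the generated config (version-dependent subcommand).
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new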
	I1212 01:33:16.730349  287206 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:33:16.733792  287206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:33:16.743217  287206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:33:16.879123  287206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:33:16.896403  287206 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053 for IP: 192.168.85.2
	I1212 01:33:16.896424  287206 certs.go:195] generating shared ca certs ...
	I1212 01:33:16.896440  287206 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:33:16.896611  287206 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 01:33:16.896673  287206 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 01:33:16.896685  287206 certs.go:257] generating profile certs ...
	I1212 01:33:16.896802  287206 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/client.key
	I1212 01:33:16.896884  287206 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.key.40e68572
	I1212 01:33:16.896936  287206 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/proxy-client.key
	I1212 01:33:16.897085  287206 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 01:33:16.897122  287206 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 01:33:16.897140  287206 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:33:16.897182  287206 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:33:16.897211  287206 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:33:16.897253  287206 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 01:33:16.897323  287206 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:33:16.898045  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:33:16.917558  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:33:16.936420  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:33:16.954703  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:33:16.973775  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:33:16.993771  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:33:17.013800  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:33:17.032752  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/no-preload-361053/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:33:17.050974  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 01:33:17.069067  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:33:17.086383  287206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 01:33:17.103777  287206 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:33:17.116500  287206 ssh_runner.go:195] Run: openssl version
	I1212 01:33:17.123250  287206 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 01:33:17.130602  287206 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 01:33:17.138023  287206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 01:33:17.141876  287206 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 01:33:17.141967  287206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 01:33:17.183155  287206 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 01:33:17.190531  287206 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:33:17.197720  287206 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:33:17.205424  287206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:33:17.209634  287206 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:33:17.209717  287206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:33:17.250661  287206 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:33:17.257979  287206 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 01:33:17.265084  287206 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 01:33:17.272550  287206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 01:33:17.276176  287206 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 01:33:17.276244  287206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 01:33:17.316946  287206 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
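The repeating ln -fs / openssl x509 -hash pairs above build the standard OpenSSL hashed-certificates directory: each CA under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink (3ec20f2e.0, b5213941.0 and 51391683.0 in this run) so TLS clients can locate it by subject hash. Reproducing one link by hand, same idiom as the log:

    # Recreate the subject-hash symlink for one CA.
    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$H.0"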
	I1212 01:33:17.324295  287206 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:33:17.327973  287206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:33:17.368953  287206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:33:17.409868  287206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:33:17.453118  287206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:33:17.504589  287206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:33:17.551985  287206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
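The -checkend 86400 flag used throughout this block makes openssl exit non-zero if the certificate expires within 86400 seconds (24 hours), which is how minikube decides whether the control-plane certs need regeneration. For a single file:

    # Exit status signals whether the cert is still valid 24h from now.
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for at least another 24h" || echo "expires within 24h"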
	I1212 01:33:17.599976  287206 kubeadm.go:401] StartCluster: {Name:no-preload-361053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-361053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:33:17.600060  287206 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 01:33:17.600116  287206 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:33:17.627743  287206 cri.go:89] found id: ""
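The empty id list above means containerd currently reports no kube-system pod containers matching the filter, so there is nothing for the restart path to act on yet. The same query by hand, with the CRI label filter from the log:

    # List all (including exited) kube-system containers by CRI label.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system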
	I1212 01:33:17.627848  287206 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:33:17.635686  287206 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 01:33:17.635706  287206 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 01:33:17.635790  287206 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:33:17.642948  287206 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:33:17.643377  287206 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-361053" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:33:17.643480  287206 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-2343/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-361053" cluster setting kubeconfig missing "no-preload-361053" context setting]
	I1212 01:33:17.643818  287206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:33:17.645054  287206 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:33:17.652754  287206 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1212 01:33:17.652837  287206 kubeadm.go:602] duration metric: took 17.12476ms to restartPrimaryControlPlane
	I1212 01:33:17.652856  287206 kubeadm.go:403] duration metric: took 52.888265ms to StartCluster
	I1212 01:33:17.652873  287206 settings.go:142] acquiring lock: {Name:mk6dd4250df69aeba4752e9f33aeef37272375c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:33:17.652935  287206 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:33:17.654183  287206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:33:17.654577  287206 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 01:33:17.656196  287206 config.go:182] Loaded profile config "no-preload-361053": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:33:17.656293  287206 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:33:17.656907  287206 addons.go:70] Setting storage-provisioner=true in profile "no-preload-361053"
	I1212 01:33:17.656987  287206 addons.go:239] Setting addon storage-provisioner=true in "no-preload-361053"
	I1212 01:33:17.657033  287206 host.go:66] Checking if "no-preload-361053" exists ...
	I1212 01:33:17.657291  287206 addons.go:70] Setting dashboard=true in profile "no-preload-361053"
	I1212 01:33:17.657329  287206 addons.go:239] Setting addon dashboard=true in "no-preload-361053"
	W1212 01:33:17.657367  287206 addons.go:248] addon dashboard should already be in state true
	I1212 01:33:17.657411  287206 host.go:66] Checking if "no-preload-361053" exists ...
	I1212 01:33:17.658033  287206 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:33:17.658984  287206 addons.go:70] Setting default-storageclass=true in profile "no-preload-361053"
	I1212 01:33:17.659056  287206 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-361053"
	I1212 01:33:17.659163  287206 out.go:179] * Verifying Kubernetes components...
	I1212 01:33:17.659657  287206 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:33:17.659918  287206 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:33:17.663168  287206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:33:17.699556  287206 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:33:17.702548  287206 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:33:17.702568  287206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:33:17.702633  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:17.707904  287206 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 01:33:17.712570  287206 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 01:33:17.715424  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 01:33:17.715452  287206 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 01:33:17.715527  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:17.716395  287206 addons.go:239] Setting addon default-storageclass=true in "no-preload-361053"
	I1212 01:33:17.716432  287206 host.go:66] Checking if "no-preload-361053" exists ...
	I1212 01:33:17.716844  287206 cli_runner.go:164] Run: docker container inspect no-preload-361053 --format={{.State.Status}}
	I1212 01:33:17.757307  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:17.780041  287206 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:33:17.780062  287206 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:33:17.780201  287206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-361053
	I1212 01:33:17.787971  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:17.824270  287206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/no-preload-361053/id_rsa Username:docker}
	I1212 01:33:17.914381  287206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:33:17.932340  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:33:17.963955  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:33:17.997943  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 01:33:17.997970  287206 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 01:33:18.029336  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 01:33:18.029363  287206 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 01:33:18.049546  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 01:33:18.049613  287206 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 01:33:18.063361  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 01:33:18.063384  287206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 01:33:18.077187  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 01:33:18.077211  287206 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 01:33:18.090368  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 01:33:18.090397  287206 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 01:33:18.104111  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 01:33:18.104141  287206 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 01:33:18.117846  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 01:33:18.117869  287206 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 01:33:18.130797  287206 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:33:18.130820  287206 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 01:33:18.144585  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:33:18.535208  287206 node_ready.go:35] waiting up to 6m0s for node "no-preload-361053" to be "Ready" ...
	W1212 01:33:18.535668  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.535741  287206 retry.go:31] will retry after 176.168279ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:18.535830  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.535866  287206 retry.go:31] will retry after 310.631399ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:18.536093  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.536119  287206 retry.go:31] will retry after 343.133583ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
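Every failure in this retry sequence is the same condition: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver on localhost:8443, which is not accepting connections yet after the restart, so minikube backs off and retries (176ms, 310ms, 343ms, ... above). A hedged manual equivalent is to wait for the apiserver health endpoint first; in recent Kubernetes releases /healthz is readable anonymously under the default RBAC, though that is an assumption about this cluster's configuration:

    # Wait for the apiserver to come up, then retry the apply by hand.
    until curl -skf https://localhost:8443/healthz >/dev/null; do sleep 1; done
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      -f /etc/kubernetes/addons/storage-provisioner.yaml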
	I1212 01:33:18.712568  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:18.773707  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.773739  287206 retry.go:31] will retry after 503.490188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.847154  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:33:18.879640  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:18.920064  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.920144  287206 retry.go:31] will retry after 545.970645ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:18.950800  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:18.950834  287206 retry.go:31] will retry after 319.954632ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:19.271042  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:33:19.278476  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:19.399940  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:19.399978  287206 retry.go:31] will retry after 290.065244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1212 01:33:19.400038  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:19.400050  287206 retry.go:31] will retry after 299.213835ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1212 01:33:19.466369  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:19.524517  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:19.524549  287206 retry.go:31] will retry after 743.245184ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
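[Editor's note] The retry.go:31 entries above show minikube re-running each failed kubectl apply with a growing, jittered delay (roughly 290ms, 299ms, 743ms, 985ms, ... in this run) while the apiserver on localhost:8443 is still refusing connections. As an illustration only, here is a minimal Go sketch of that retry-with-backoff pattern; retryExpo and the inline apply callback are hypothetical names, not minikube's actual implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries fn with an exponentially growing, jittered delay,
// giving up once the total elapsed time exceeds maxTime.
func retryExpo(fn func() error, initial, maxTime time.Duration) error {
	start := time.Now()
	delay := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxTime {
			return fmt.Errorf("timed out after %s: %w", maxTime, err)
		}
		// Jitter the delay by +/-50%, mirroring the uneven intervals
		// logged above ("will retry after 290.065244ms", "743.245184ms", ...).
		jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
}

func main() {
	attempts := 0
	// Hypothetical stand-in for "kubectl apply --force -f ...": it fails with
	// "connection refused" until the apiserver comes back.
	err := retryExpo(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("dial tcp [::1]:8443: connect: connection refused")
		}
		return nil
	}, 300*time.Millisecond, 30*time.Second)
	fmt.Println("result:", err)
}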
	I1212 01:33:19.690541  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:33:19.700168  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:19.757922  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:19.758015  287206 retry.go:31] will retry after 985.188119ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1212 01:33:19.779719  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:19.779761  287206 retry.go:31] will retry after 704.931485ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1212 01:33:20.267995  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:20.329699  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:20.329775  287206 retry.go:31] will retry after 765.58633ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1212 01:33:20.485196  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:20.536023  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:33:20.550357  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:20.550436  287206 retry.go:31] will retry after 1.819808593s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1212 01:33:20.743955  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:20.831697  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:20.831734  287206 retry.go:31] will retry after 930.762916ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1212 01:33:21.095851  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:21.157009  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:21.157042  287206 retry.go:31] will retry after 1.605590789s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1212 01:33:21.763111  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:21.825538  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:21.825574  287206 retry.go:31] will retry after 2.503052767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1212 01:33:22.370497  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:22.431275  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:22.431307  287206 retry.go:31] will retry after 2.355012393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1212 01:33:22.763437  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:22.850160  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:22.850194  287206 retry.go:31] will retry after 1.879850762s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1212 01:33:23.035858  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
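[Editor's note] The node_ready.go:55 entries are minikube polling the node's Ready condition and tolerating apiserver connection errors between attempts. Below is a minimal client-go sketch of that kind of poll, for illustration only: the nodeReady helper, the fixed 2s interval, and the kubeconfig path are assumptions, not minikube's actual code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connect: connection refused" while the apiserver restarts
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Illustrative path; the log above uses /var/lib/minikube/kubeconfig inside the guest.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := nodeReady(context.Background(), cs, "no-preload-361053")
		if err != nil {
			fmt.Println("error getting node condition (will retry):", err)
		} else if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}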
	I1212 01:33:24.329354  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:24.389132  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:24.389164  287206 retry.go:31] will retry after 2.014894624s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1212 01:33:24.731243  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:33:24.786964  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:24.789370  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:24.789397  287206 retry.go:31] will retry after 4.117004363s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1212 01:33:24.843221  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:24.843251  287206 retry.go:31] will retry after 1.752927223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1212 01:33:25.535881  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:26.405127  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:26.464187  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:26.464220  287206 retry.go:31] will retry after 5.197320965s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:26.596983  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:26.656070  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:26.656104  287206 retry.go:31] will retry after 5.533382625s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:28.035833  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:28.907563  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:28.966861  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:28.966896  287206 retry.go:31] will retry after 5.418423295s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
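Each "will retry after ..." line comes from minikube's retry helper (retry.go), which re-runs the apply on a growing, jittered delay. A minimal sketch of the equivalent loop, assuming the same kubectl path as the log; the delays here are illustrative, not minikube's actual schedule:

    # Re-run the failing apply a few times with increasing sleeps,
    # mirroring the retry.go behaviour visible in the log.
    for delay in 5 6 8 13; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
        -f /etc/kubernetes/addons/storageclass.yaml && break
      sleep "$delay"
    done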
	W1212 01:33:30.036974  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:30.663739  276743 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001214099s
	I1212 01:33:30.663765  276743 kubeadm.go:319] 
	I1212 01:33:30.663824  276743 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 01:33:30.664225  276743 kubeadm.go:319] 	- The kubelet is not running
	I1212 01:33:30.664463  276743 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:33:30.664470  276743 kubeadm.go:319] 
	I1212 01:33:30.664859  276743 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:33:30.664924  276743 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 01:33:30.664997  276743 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 01:33:30.665002  276743 kubeadm.go:319] 
	I1212 01:33:30.670247  276743 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:33:30.670737  276743 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 01:33:30.670876  276743 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:33:30.671132  276743 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 01:33:30.671145  276743 kubeadm.go:319] 
	I1212 01:33:30.671240  276743 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
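The advice kubeadm prints is the right first move; collected into one runnable sequence (run inside the node; the healthz probe is the exact URL kubeadm's kubelet-check polls above):

    # Why is the kubelet down? (commands quoted from kubeadm's own output)
    systemctl status kubelet
    journalctl -xeu kubelet

    # The health probe kubeadm waited 4m0s on:
    curl -sSL http://127.0.0.1:10248/healthz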
	I1212 01:33:30.671353  276743 kubeadm.go:403] duration metric: took 8m6.695748826s to StartCluster
	I1212 01:33:30.671412  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:33:30.671514  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:33:30.695843  276743 cri.go:89] found id: ""
	I1212 01:33:30.695865  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.695874  276743 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:33:30.695882  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:33:30.695947  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:33:30.721320  276743 cri.go:89] found id: ""
	I1212 01:33:30.721346  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.721355  276743 logs.go:284] No container was found matching "etcd"
	I1212 01:33:30.721361  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:33:30.721447  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:33:30.745399  276743 cri.go:89] found id: ""
	I1212 01:33:30.745432  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.745441  276743 logs.go:284] No container was found matching "coredns"
	I1212 01:33:30.745447  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:33:30.745544  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:33:30.770020  276743 cri.go:89] found id: ""
	I1212 01:33:30.770053  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.770062  276743 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:33:30.770082  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:33:30.770166  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:33:30.793304  276743 cri.go:89] found id: ""
	I1212 01:33:30.793329  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.793338  276743 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:33:30.793344  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:33:30.793405  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:33:30.821216  276743 cri.go:89] found id: ""
	I1212 01:33:30.821286  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.821295  276743 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:33:30.821302  276743 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:33:30.821374  276743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:33:30.849092  276743 cri.go:89] found id: ""
	I1212 01:33:30.849118  276743 logs.go:282] 0 containers: []
	W1212 01:33:30.849127  276743 logs.go:284] No container was found matching "kindnet"
	I1212 01:33:30.849160  276743 logs.go:123] Gathering logs for kubelet ...
	I1212 01:33:30.849178  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:33:30.908511  276743 logs.go:123] Gathering logs for dmesg ...
	I1212 01:33:30.908546  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:33:30.921702  276743 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:33:30.921728  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:33:30.986459  276743 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:33:30.978227    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.978917    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.980428    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.980954    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.982546    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:33:30.978227    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.978917    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.980428    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.980954    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:33:30.982546    4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:33:30.986482  276743 logs.go:123] Gathering logs for containerd ...
	I1212 01:33:30.986494  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:33:31.026654  276743 logs.go:123] Gathering logs for container status ...
	I1212 01:33:31.026689  276743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
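When the control plane never comes up, minikube falls back to gathering diagnostics, as the lines above show. The same evidence can be pulled by hand with the commands it runs (all quoted verbatim from this log):

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u containerd -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

Note that every crictl query above returned an empty id list: containerd never started a single control-plane container, consistent with the kubelet never becoming healthy.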
	W1212 01:33:31.065726  276743 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001214099s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:33:31.065772  276743 out.go:285] * 
	W1212 01:33:31.065854  276743 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001214099s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:33:31.065866  276743 out.go:285] * 
	W1212 01:33:31.067985  276743 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:33:31.073102  276743 out.go:203] 
	W1212 01:33:31.076901  276743 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001214099s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:33:31.076950  276743 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 01:33:31.076972  276743 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 01:33:31.079948  276743 out.go:203] 
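Both warnings in this failure point the same way: the 5.15.0-1084-aws kernel is running cgroups v1, which kubelet v1.35 rejects unless the 'FailCgroupV1' kubelet configuration option is explicitly set to 'false'. Two follow-ups suggested by the output itself (the profile name is a placeholder, since this excerpt does not show which profile process 276743 was starting, and whether the flag alone suffices on this host is untested):

    # Confirm the host's cgroup hierarchy: "cgroup2fs" means v2, "tmpfs" means v1.
    stat -fc %T /sys/fs/cgroup/

    # The remediation minikube prints verbatim above:
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd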
	I1212 01:33:31.661672  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:31.760812  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:31.760847  287206 retry.go:31] will retry after 8.18905348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:32.189837  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:32.273064  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:32.273096  287206 retry.go:31] will retry after 6.81084135s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:32.535806  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:34.386064  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:34.462330  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:34.462361  287206 retry.go:31] will retry after 6.305262233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:34.536061  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:33:37.035955  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:33:39.036556  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:39.084930  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:39.143830  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:39.143862  287206 retry.go:31] will retry after 12.343488003s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:39.950184  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:40.025519  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:40.025559  287206 retry.go:31] will retry after 5.922815184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:40.768598  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:40.846292  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:40.846323  287206 retry.go:31] will retry after 13.102314865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:41.536492  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:33:43.536575  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:45.949469  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:46.015241  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:46.015277  287206 retry.go:31] will retry after 13.405032383s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:46.036443  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:33:48.036746  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:33:50.536732  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:51.488392  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:33:51.559260  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:51.559300  287206 retry.go:31] will retry after 18.362274333s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:53.036486  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:53.949110  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:33:54.011238  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:54.011276  287206 retry.go:31] will retry after 19.774665037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:33:55.536392  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:33:57.536557  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:33:59.421135  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:33:59.485322  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:33:59.485358  287206 retry.go:31] will retry after 11.142105361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:34:00.038446  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:02.536540  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:04.536688  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:07.036629  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:09.536438  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:34:09.921829  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:34:09.985846  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:34:09.985875  287206 retry.go:31] will retry after 18.589744512s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:34:10.627648  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:34:10.686876  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:34:10.686912  287206 retry.go:31] will retry after 19.942061986s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:34:11.536631  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:34:13.787002  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:34:13.855652  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:34:13.855679  287206 retry.go:31] will retry after 16.508119977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:34:14.036392  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:16.036509  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:18.036746  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:20.535704  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:22.536639  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:25.036477  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:27.536451  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:34:28.576798  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:34:28.636179  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:34:28.636209  287206 retry.go:31] will retry after 29.151273891s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:34:29.536571  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:34:30.364127  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:34:30.423592  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:34:30.423622  287206 retry.go:31] will retry after 42.216578771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:34:30.629600  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:34:30.691702  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:34:30.691800  287206 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
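The delays logged above (13.1s, 13.4s, 18.4s, 19.8s, 11.1s, 18.6s, 19.9s, 16.5s, 29.2s, 42.2s) are minikube's retry.go re-running each apply with jittered, loosely growing waits; once the addon deadline passes, the "Enabling 'dashboard' returned an error" warning is emitted instead of failing the whole start. A rough shell sketch of that retry pattern (illustrative only: minikube's real implementation is the Go retry.go seen in the log, and APPLY_CMD and the bounds here are made up):

    APPLY_CMD="kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml"
    delay=10
    for attempt in 1 2 3 4 5; do
      $APPLY_CMD && exit 0
      jitter=$((RANDOM % 10))      # spread retries so parallel appliers don't align
      sleep $((delay + jitter))
      delay=$((delay + 5))         # grow the base wait each attempt
    done
    echo "apply still failing after $attempt attempts" >&2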
	W1212 01:34:32.036503  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:34.036641  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:36.036848  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:38.536526  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:41.035818  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:43.036531  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:45.036798  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:47.536472  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:49.536610  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:52.036490  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:54.036574  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:34:56.536420  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:34:57.788055  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:34:57.848257  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:34:57.848347  287206 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1212 01:34:59.036628  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:01.535862  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:03.536510  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884168277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884235388Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884325300Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884398827Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884472510Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884537996Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884594398Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884658768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884723680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.884808743Z" level=info msg="Connect containerd service"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.885150498Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.885821438Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.897230715Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.897398782Z" level=info msg="Start subscribing containerd event"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.897528104Z" level=info msg="Start recovering state"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.897473433Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.936318243Z" level=info msg="Start event monitor"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.936517777Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.936583329Z" level=info msg="Start streaming server"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.936647608Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.936703617Z" level=info msg="runtime interface starting up..."
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.936753997Z" level=info msg="starting plugins..."
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.936827064Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 01:25:21 newest-cni-256959 containerd[758]: time="2025-12-12T01:25:21.937025409Z" level=info msg="containerd successfully booted in 0.077505s"
	Dec 12 01:25:21 newest-cni-256959 systemd[1]: Started containerd.service - containerd container runtime.
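containerd's "no network config found in /etc/cni/net.d" at startup is expected at this stage: the CRI plugin loads CNI lazily and keeps watching that directory via the conf syncer started above, so pod networking only comes up once a CNI config is written there. To see what, if anything, is installed on the node:

    # An empty listing here matches the "cni plugin not initialized" message
    # in the containerd log above.
    ls -l /etc/cni/net.d/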
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:35:09.249953    5886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:35:09.250625    5886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:35:09.252246    5886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:35:09.252746    5886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:35:09.254395    5886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	[Dec12 00:40] hrtimer: interrupt took 11339963 ns
	
	
	==> kernel <==
	 01:35:09 up  2:17,  0 user,  load average: 0.37, 0.82, 1.61
	Linux newest-cni-256959 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 01:35:06 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:35:06 newest-cni-256959 kubelet[5765]: E1212 01:35:06.334872    5765 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:35:06 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:35:06 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:35:07 newest-cni-256959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 448.
	Dec 12 01:35:07 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:35:07 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:35:07 newest-cni-256959 kubelet[5771]: E1212 01:35:07.082418    5771 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:35:07 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:35:07 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:35:07 newest-cni-256959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 449.
	Dec 12 01:35:07 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:35:07 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:35:07 newest-cni-256959 kubelet[5777]: E1212 01:35:07.827118    5777 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:35:07 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:35:07 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:35:08 newest-cni-256959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 450.
	Dec 12 01:35:08 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:35:08 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:35:08 newest-cni-256959 kubelet[5802]: E1212 01:35:08.557824    5802 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:35:08 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:35:08 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:35:09 newest-cni-256959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 451.
	Dec 12 01:35:09 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:35:09 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
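The kubelet restart loop at the end of this log is the root cause of the `describe nodes` failure above: kubelet exits on a configuration validation error ("kubelet is configured to not run on a host using cgroup v1"), so the static-pod API server never comes up and every request to localhost:8443 is refused. One way to confirm which cgroup hierarchy the host is running (a diagnostic sketch, not part of the test flow):

	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy
	# cgroup v1, which this kubelet build refuses to run on.
	stat -fc %T /sys/fs/cgroup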
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-256959 -n newest-cni-256959
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-256959 -n newest-cni-256959: exit status 6 (361.162714ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 01:35:09.774798  291156 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-256959" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-256959" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (97.13s)
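The status output above shows why the kubectl steps were skipped: the "newest-cni-256959" entry is missing from the kubeconfig, so the current context points at a stale endpoint. The fix minikube itself suggests, with the profile flag filled in from this run (a sketch of the recovery step, not something the test executes):

	# Rewrite the kubeconfig entry for this profile to the current endpoint.
	out/minikube-linux-arm64 update-context -p newest-cni-256959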

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (374.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1212 01:35:50.110154    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:35:57.042311    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:36:18.647936    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:38:41.615517    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:38:52.051490    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m9.204863076s)

                                                
                                                
-- stdout --
	* [newest-cni-256959] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-256959" primary control-plane node in "newest-cni-256959" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 01:35:11.336080  291455 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:35:11.336277  291455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:35:11.336290  291455 out.go:374] Setting ErrFile to fd 2...
	I1212 01:35:11.336296  291455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:35:11.336566  291455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:35:11.336950  291455 out.go:368] Setting JSON to false
	I1212 01:35:11.337843  291455 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8258,"bootTime":1765495054,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 01:35:11.337913  291455 start.go:143] virtualization:  
	I1212 01:35:11.341103  291455 out.go:179] * [newest-cni-256959] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:35:11.345273  291455 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:35:11.345376  291455 notify.go:221] Checking for updates...
	I1212 01:35:11.351231  291455 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:35:11.354134  291455 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:35:11.357086  291455 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 01:35:11.359981  291455 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:35:11.363090  291455 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:35:11.366381  291455 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:35:11.367076  291455 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:35:11.397719  291455 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:35:11.397845  291455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:35:11.450218  291455 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:35:11.441400779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:35:11.450324  291455 docker.go:319] overlay module found
	I1212 01:35:11.453495  291455 out.go:179] * Using the docker driver based on existing profile
	I1212 01:35:11.456257  291455 start.go:309] selected driver: docker
	I1212 01:35:11.456272  291455 start.go:927] validating driver "docker" against &{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:11.456385  291455 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:35:11.457105  291455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:35:11.512167  291455 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:35:11.503270098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:35:11.512501  291455 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 01:35:11.512533  291455 cni.go:84] Creating CNI manager for ""
	I1212 01:35:11.512581  291455 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:35:11.512620  291455 start.go:353] cluster config:
	{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:11.517595  291455 out.go:179] * Starting "newest-cni-256959" primary control-plane node in "newest-cni-256959" cluster
	I1212 01:35:11.520355  291455 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 01:35:11.523510  291455 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:35:11.526310  291455 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:35:11.526350  291455 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 01:35:11.526380  291455 cache.go:65] Caching tarball of preloaded images
	I1212 01:35:11.526401  291455 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:35:11.526463  291455 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 01:35:11.526474  291455 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 01:35:11.526577  291455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:35:11.545949  291455 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:35:11.545972  291455 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:35:11.545990  291455 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:35:11.546021  291455 start.go:360] acquireMachinesLock for newest-cni-256959: {Name:mke4c35c218ad59b1da2c46074b57e71134fc7be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:35:11.546106  291455 start.go:364] duration metric: took 61.449µs to acquireMachinesLock for "newest-cni-256959"
	I1212 01:35:11.546128  291455 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:35:11.546140  291455 fix.go:54] fixHost starting: 
	I1212 01:35:11.546394  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:11.562986  291455 fix.go:112] recreateIfNeeded on newest-cni-256959: state=Stopped err=<nil>
	W1212 01:35:11.563044  291455 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:35:11.566225  291455 out.go:252] * Restarting existing docker container for "newest-cni-256959" ...
	I1212 01:35:11.566307  291455 cli_runner.go:164] Run: docker start newest-cni-256959
	I1212 01:35:11.824711  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:11.850549  291455 kic.go:430] container "newest-cni-256959" state is running.
	I1212 01:35:11.850948  291455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:35:11.874496  291455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:35:11.875491  291455 machine.go:94] provisionDockerMachine start ...
	I1212 01:35:11.875566  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:11.904543  291455 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:11.904867  291455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 01:35:11.904894  291455 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:35:11.905649  291455 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 01:35:15.062841  291455 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:35:15.062884  291455 ubuntu.go:182] provisioning hostname "newest-cni-256959"
	I1212 01:35:15.062966  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.081374  291455 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:15.081715  291455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 01:35:15.081732  291455 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-256959 && echo "newest-cni-256959" | sudo tee /etc/hostname
	I1212 01:35:15.244594  291455 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:35:15.244717  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.262885  291455 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:15.263226  291455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 01:35:15.263249  291455 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-256959' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-256959/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-256959' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:35:15.415381  291455 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:35:15.415407  291455 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 01:35:15.415450  291455 ubuntu.go:190] setting up certificates
	I1212 01:35:15.415469  291455 provision.go:84] configureAuth start
	I1212 01:35:15.415542  291455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:35:15.432184  291455 provision.go:143] copyHostCerts
	I1212 01:35:15.432260  291455 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 01:35:15.432274  291455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 01:35:15.432771  291455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 01:35:15.432891  291455 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 01:35:15.432905  291455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 01:35:15.432935  291455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 01:35:15.433008  291455 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 01:35:15.433018  291455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 01:35:15.433044  291455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 01:35:15.433100  291455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.newest-cni-256959 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-256959]
	I1212 01:35:15.664957  291455 provision.go:177] copyRemoteCerts
	I1212 01:35:15.665025  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:35:15.665084  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.682010  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:15.786690  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:35:15.804464  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:35:15.821597  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 01:35:15.838753  291455 provision.go:87] duration metric: took 423.263374ms to configureAuth
	I1212 01:35:15.838782  291455 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:35:15.839040  291455 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:35:15.839053  291455 machine.go:97] duration metric: took 3.963544394s to provisionDockerMachine
	I1212 01:35:15.839061  291455 start.go:293] postStartSetup for "newest-cni-256959" (driver="docker")
	I1212 01:35:15.839072  291455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:35:15.839119  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:35:15.839169  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.855712  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:15.959303  291455 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:35:15.962341  291455 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:35:15.962368  291455 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:35:15.962380  291455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 01:35:15.962429  291455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 01:35:15.962509  291455 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 01:35:15.962609  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:35:15.969472  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:35:15.986194  291455 start.go:296] duration metric: took 147.119175ms for postStartSetup
	I1212 01:35:15.986304  291455 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:35:15.986375  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:16.005019  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:16.107859  291455 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:35:16.112663  291455 fix.go:56] duration metric: took 4.566516262s for fixHost
	I1212 01:35:16.112691  291455 start.go:83] releasing machines lock for "newest-cni-256959", held for 4.566573288s
	I1212 01:35:16.112760  291455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:35:16.129477  291455 ssh_runner.go:195] Run: cat /version.json
	I1212 01:35:16.129531  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:16.129775  291455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:35:16.129824  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:16.153158  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:16.155921  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:16.367474  291455 ssh_runner.go:195] Run: systemctl --version
	I1212 01:35:16.373832  291455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:35:16.378022  291455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:35:16.378104  291455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:35:16.385747  291455 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 01:35:16.385772  291455 start.go:496] detecting cgroup driver to use...
	I1212 01:35:16.385819  291455 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:35:16.385882  291455 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 01:35:16.403657  291455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 01:35:16.417469  291455 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:35:16.417564  291455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:35:16.433612  291455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:35:16.446861  291455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:35:16.554018  291455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:35:16.672193  291455 docker.go:234] disabling docker service ...
	I1212 01:35:16.672283  291455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:35:16.687238  291455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:35:16.700659  291455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:35:16.812563  291455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:35:16.928270  291455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:35:16.941185  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:35:16.957067  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 01:35:16.966276  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 01:35:16.975221  291455 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 01:35:16.975292  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 01:35:16.984294  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:35:16.993328  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 01:35:17.004796  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:35:17.015289  291455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:35:17.023922  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 01:35:17.036658  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 01:35:17.046732  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 01:35:17.056354  291455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:35:17.064063  291455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
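The two commands above check that bridged traffic is visible to iptables and switch on IPv4 forwarding, both prerequisites for pod networking. Written this way they last only until reboot; a persistent variant on a plain host (a sketch of standard practice, not what minikube does here) would be:

	# Hypothetical sysctl drop-in; minikube re-applies the settings on every start instead.
	printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | \
	    sudo tee /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system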
	I1212 01:35:17.071833  291455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:17.188012  291455 ssh_runner.go:195] Run: sudo systemctl restart containerd
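The sed edits above force `SystemdCgroup = false` in /etc/containerd/config.toml so that containerd drives cgroups through the "cgroupfs" driver, matching the `cgroupDriver: cgroupfs` kubelet setting generated further down. After the restart, the effective value can be checked without caring which TOML table the key lives in (the path moved between containerd 1.x and 2.x):

	# Expect "SystemdCgroup = false" when the cgroupfs driver is in use.
	grep -n 'SystemdCgroup' /etc/containerd/config.toml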
	I1212 01:35:17.306110  291455 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 01:35:17.306231  291455 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 01:35:17.309882  291455 start.go:564] Will wait 60s for crictl version
	I1212 01:35:17.309968  291455 ssh_runner.go:195] Run: which crictl
	I1212 01:35:17.313475  291455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:35:17.340045  291455 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 01:35:17.340140  291455 ssh_runner.go:195] Run: containerd --version
	I1212 01:35:17.360301  291455 ssh_runner.go:195] Run: containerd --version
	I1212 01:35:17.385714  291455 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 01:35:17.388490  291455 cli_runner.go:164] Run: docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:35:17.404979  291455 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 01:35:17.409350  291455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:35:17.422610  291455 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 01:35:17.425426  291455 kubeadm.go:884] updating cluster {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:35:17.425578  291455 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:35:17.425675  291455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:35:17.450191  291455 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:35:17.450217  291455 containerd.go:534] Images already preloaded, skipping extraction
	I1212 01:35:17.450277  291455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:35:17.474185  291455 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:35:17.474220  291455 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:35:17.474228  291455 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1212 01:35:17.474373  291455 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-256959 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
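In the generated unit drop-in above, the empty `ExecStart=` line is the systemd idiom for clearing the start command inherited from the packaged kubelet.service before substituting minikube's own invocation; without the reset, systemd would reject a second ExecStart for a simple service. The merged result on the node can be inspected with:

	# Show the base unit plus every drop-in, in the order systemd applies them.
	systemctl cat kubelet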
	I1212 01:35:17.474472  291455 ssh_runner.go:195] Run: sudo crictl info
	I1212 01:35:17.498662  291455 cni.go:84] Creating CNI manager for ""
	I1212 01:35:17.498685  291455 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:35:17.498869  291455 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 01:35:17.498905  291455 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-256959 NodeName:newest-cni-256959 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:35:17.499182  291455 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-256959"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
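The config above packs InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration into one multi-document file, which is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below. On recent kubeadm releases it can be sanity-checked offline before being applied (a sketch using the binary path from this run):

	# Validate the multi-document config without touching the cluster.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new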
	
	I1212 01:35:17.499276  291455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 01:35:17.511920  291455 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:35:17.512017  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:35:17.519602  291455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:35:17.532107  291455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 01:35:17.545262  291455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1212 01:35:17.557618  291455 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:35:17.561053  291455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:35:17.570894  291455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:17.675958  291455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:35:17.692695  291455 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959 for IP: 192.168.76.2
	I1212 01:35:17.692715  291455 certs.go:195] generating shared ca certs ...
	I1212 01:35:17.692750  291455 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:17.692911  291455 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 01:35:17.692980  291455 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 01:35:17.692995  291455 certs.go:257] generating profile certs ...
	I1212 01:35:17.693112  291455 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key
	I1212 01:35:17.693202  291455 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93
	I1212 01:35:17.693309  291455 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key
	I1212 01:35:17.693447  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 01:35:17.693518  291455 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 01:35:17.693536  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:35:17.693582  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:35:17.693632  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:35:17.693666  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 01:35:17.693747  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:35:17.694397  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:35:17.712974  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:35:17.738035  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:35:17.758905  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:35:17.776423  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:35:17.805243  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:35:17.826665  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:35:17.847012  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:35:17.868946  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:35:17.887272  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 01:35:17.904023  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 01:35:17.920802  291455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:35:17.933645  291455 ssh_runner.go:195] Run: openssl version
	I1212 01:35:17.939797  291455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.946909  291455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:35:17.954537  291455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.958217  291455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.958301  291455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.998878  291455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:35:18.008093  291455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.016725  291455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 01:35:18.025237  291455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.029387  291455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.029458  291455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.072423  291455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:35:18.080329  291455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.088043  291455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 01:35:18.095703  291455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.100065  291455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.100135  291455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.141016  291455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
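
The three rounds above follow the same pattern: copy a CA into /usr/share/ca-certificates, compute its OpenSSL subject hash (`openssl x509 -hash -noout`), and symlink it into /etc/ssl/certs under that hash plus ".0" (minikubeCA.pem becomes b5213941.0, 4290.pem becomes 51391683.0, 42902.pem becomes 3ec20f2e.0), which is the lookup scheme OpenSSL uses to find trusted roots. A minimal Go sketch of the same steps, shelling out to openssl just as the log does (paths are illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA links a CA certificate into /etc/ssl/certs under its
    // OpenSSL subject-hash name (<hash>.0), mirroring the openssl/ln
    // steps in the log above.
    func installCA(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pem, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace any stale link, like ln -fs
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
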
	I1212 01:35:18.148423  291455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:35:18.152541  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:35:18.195372  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:35:18.236073  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:35:18.276924  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:35:18.317697  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:35:18.358213  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
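
Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); minikube runs it against every control-plane cert before deciding the existing ones can be reused. The equivalent check in pure Go, as a sketch rather than minikube's actual code:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within d, the equivalent of `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
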
	I1212 01:35:18.400083  291455 kubeadm.go:401] StartCluster: {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:18.400177  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 01:35:18.400236  291455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:35:18.437669  291455 cri.go:89] found id: ""
	I1212 01:35:18.437744  291455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:35:18.446134  291455 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 01:35:18.446156  291455 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 01:35:18.446208  291455 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:35:18.453928  291455 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:35:18.454522  291455 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-256959" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:35:18.454766  291455 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-2343/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-256959" cluster setting kubeconfig missing "newest-cni-256959" context setting]
	I1212 01:35:18.455226  291455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
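
Here kubeconfig.go notices the profile is absent from the test run's kubeconfig and repairs it (adding both a cluster and a context entry) under the WriteFile lock shown above. Roughly what that repair amounts to, using client-go's clientcmd; this is a sketch under the assumption that the server address is the one from the log, it omits the file locking, and minikube's own helper differs in detail:

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    // addProfile inserts a cluster and a context for the profile into an
    // existing kubeconfig file, the repair described in the log.
    func addProfile(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        cluster := api.NewCluster()
        cluster.Server = server
        cfg.Clusters[name] = cluster

        ctx := api.NewContext()
        ctx.Cluster = name
        ctx.AuthInfo = name
        cfg.Contexts[name] = ctx
        cfg.CurrentContext = name

        return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
        _ = addProfile("/home/jenkins/minikube-integration/22101-2343/kubeconfig",
            "newest-cni-256959", "https://192.168.76.2:8443")
    }
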
	I1212 01:35:18.456674  291455 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:35:18.464597  291455 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1212 01:35:18.464630  291455 kubeadm.go:602] duration metric: took 18.46826ms to restartPrimaryControlPlane
	I1212 01:35:18.464640  291455 kubeadm.go:403] duration metric: took 64.568702ms to StartCluster
	I1212 01:35:18.464656  291455 settings.go:142] acquiring lock: {Name:mk6dd4250df69aeba4752e9f33aeef37272375c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:18.464716  291455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:35:18.465619  291455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:18.465827  291455 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 01:35:18.466211  291455 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:35:18.466236  291455 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:35:18.466355  291455 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-256959"
	I1212 01:35:18.466367  291455 addons.go:70] Setting dashboard=true in profile "newest-cni-256959"
	I1212 01:35:18.466371  291455 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-256959"
	I1212 01:35:18.466378  291455 addons.go:239] Setting addon dashboard=true in "newest-cni-256959"
	W1212 01:35:18.466385  291455 addons.go:248] addon dashboard should already be in state true
	I1212 01:35:18.466396  291455 host.go:66] Checking if "newest-cni-256959" exists ...
	I1212 01:35:18.466403  291455 host.go:66] Checking if "newest-cni-256959" exists ...
	I1212 01:35:18.466836  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:18.466869  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:18.467337  291455 addons.go:70] Setting default-storageclass=true in profile "newest-cni-256959"
	I1212 01:35:18.467363  291455 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-256959"
	I1212 01:35:18.467641  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:18.469758  291455 out.go:179] * Verifying Kubernetes components...
	I1212 01:35:18.473053  291455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:18.505578  291455 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:35:18.507992  291455 addons.go:239] Setting addon default-storageclass=true in "newest-cni-256959"
	I1212 01:35:18.508032  291455 host.go:66] Checking if "newest-cni-256959" exists ...
	I1212 01:35:18.508443  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:18.515343  291455 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:35:18.515364  291455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:35:18.515428  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:18.518345  291455 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 01:35:18.523100  291455 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1212 01:35:18.525972  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 01:35:18.526002  291455 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 01:35:18.526079  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:18.564602  291455 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:18.564630  291455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:35:18.564700  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:18.565404  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:18.592490  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:18.614974  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
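
sshutil opens one SSH client per pending addon-file copy, all against 127.0.0.1:33103 (the host port Docker forwards into the node container) using the profile's id_rsa. A minimal equivalent with golang.org/x/crypto/ssh; host-key checking is disabled in this sketch only because the target is a local kic container:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // dial opens an SSH client the way sshutil.go does: key auth as the
    // "docker" user against the forwarded Docker port.
    func dial(addr, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container only
        }
        return ssh.Dial("tcp", addr, cfg)
    }

    func main() {
        client, err := dial("127.0.0.1:33103",
            "/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer client.Close()
        fmt.Println("connected")
    }
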
	I1212 01:35:18.707284  291455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:35:18.738514  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:35:18.783779  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 01:35:18.783804  291455 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 01:35:18.797813  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:18.817201  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 01:35:18.817275  291455 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 01:35:18.834247  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 01:35:18.834268  291455 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 01:35:18.850261  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 01:35:18.850281  291455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 01:35:18.864878  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 01:35:18.864902  291455 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 01:35:18.879989  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 01:35:18.880012  291455 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 01:35:18.893252  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 01:35:18.893275  291455 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 01:35:18.906457  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 01:35:18.906522  291455 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 01:35:18.919410  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:18.919484  291455 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 01:35:18.931957  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:19.295481  291455 api_server.go:52] waiting for apiserver process to appear ...
	W1212 01:35:19.295638  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.295690  291455 retry.go:31] will retry after 249.842732ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:35:19.295768  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.295783  291455 retry.go:31] will retry after 351.420897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:35:19.296118  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.296142  291455 retry.go:31] will retry after 281.426587ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
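
From this point the log is dominated by retry.go cycles: each failed apply is rescheduled after a short randomized delay (249.842732ms, 351.420897ms, 281.426587ms, and so on above). A sketch of that shape, retrying with a jittered delay until the callback succeeds or the attempt budget runs out; this is an illustration of the pattern, not minikube's retry.go:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryJittered calls f until it succeeds or attempts are exhausted,
    // sleeping a randomized multiple of base between tries, like the
    // "will retry after 249.842732ms" lines above.
    func retryJittered(attempts int, base time.Duration, f func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = f(); err == nil {
                return nil
            }
            delay := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return fmt.Errorf("after %d attempts: %w", attempts, err)
    }

    func main() {
        i := 0
        _ = retryJittered(5, 300*time.Millisecond, func() error {
            i++
            if i < 3 {
                return fmt.Errorf("connection refused")
            }
            return nil
        })
    }
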
	I1212 01:35:19.296213  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:19.546048  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:35:19.578494  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:19.622946  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.623064  291455 retry.go:31] will retry after 277.166543ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.648375  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:19.656309  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.656406  291455 retry.go:31] will retry after 462.607475ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:35:19.715463  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.715506  291455 retry.go:31] will retry after 556.232924ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.796674  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:19.900383  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:19.963236  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.963266  291455 retry.go:31] will retry after 505.253944ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
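
Every failure in this stretch has the same root cause: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver, and nothing is listening on localhost:8443 yet, hence "connection refused" (the interleaved `pgrep -xnf kube-apiserver...` runs are minikube waiting for the process to appear). One direct way to wait for readiness instead is to poll the apiserver's /readyz endpoint; this is a sketch of that alternative, not what minikube does here, and it skips TLS verification purely for brevity:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForAPIServer polls /readyz until it answers 200 or the deadline
    // passes. TLS verification is skipped only to keep the sketch short.
    func waitForAPIServer(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url + "/readyz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not ready after %v", timeout)
    }

    func main() {
        if err := waitForAPIServer("https://localhost:8443", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
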
	I1212 01:35:20.119589  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:20.186519  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.186613  291455 retry.go:31] will retry after 424.835438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.272893  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:20.296648  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:20.336051  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.336183  291455 retry.go:31] will retry after 483.909657ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.469348  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:20.528062  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.528096  291455 retry.go:31] will retry after 804.643976ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.612336  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:20.682501  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.682548  291455 retry.go:31] will retry after 558.97301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.795783  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:20.820454  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:20.905698  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.905732  291455 retry.go:31] will retry after 695.755311ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.242222  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:21.295663  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:21.312788  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.312824  291455 retry.go:31] will retry after 1.866088371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.333223  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:21.395495  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.395527  291455 retry.go:31] will retry after 1.442265452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.601699  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:21.661918  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.661958  291455 retry.go:31] will retry after 965.923553ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.796193  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:22.296596  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:22.628164  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:22.689983  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:22.690024  291455 retry.go:31] will retry after 2.419076287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:22.796215  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:22.838490  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:22.896567  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:22.896595  291455 retry.go:31] will retry after 1.026441386s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:23.180088  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:23.242606  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:23.242641  291455 retry.go:31] will retry after 1.447175367s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:23.295985  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:23.795677  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:23.924269  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:23.999262  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:23.999301  291455 retry.go:31] will retry after 3.676300513s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:24.295744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:24.690891  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:24.751142  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:24.751178  291455 retry.go:31] will retry after 2.523379824s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:24.796474  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:25.109290  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:25.170081  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:25.170117  291455 retry.go:31] will retry after 1.61445699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:25.296317  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:25.796411  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:26.295885  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:26.784844  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:26.796101  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:26.910864  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:26.910893  291455 retry.go:31] will retry after 5.25056634s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.275356  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:27.295815  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:27.348749  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.348785  291455 retry.go:31] will retry after 4.97523733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.676221  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:27.738144  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.738177  291455 retry.go:31] will retry after 5.096436926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.796329  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:28.296194  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:28.795721  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:29.296646  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:29.795689  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:30.295694  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:30.796607  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:31.296202  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:31.795914  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
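	(Interleaved with the apply retries, minikube polls for the apiserver process itself about every 500ms via `sudo pgrep -xnf kube-apiserver.*minikube.*`; the poll never succeeds here, which is why the applies keep failing. A minimal sketch of that liveness check; the pgrep invocation is taken from the log, the polling wrapper is an assumption:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning checks for a kube-apiserver process whose full
	// command line matches the minikube pattern. pgrep exits 0 only when
	// a process matched, so a nil error means the apiserver is up.
	func apiserverRunning() bool {
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		return err == nil
	}

	func main() {
		for !apiserverRunning() {
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
		}
		fmt.Println("kube-apiserver is up")
	}
	)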
	I1212 01:35:32.161653  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:32.223763  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.223796  291455 retry.go:31] will retry after 3.268815276s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.296204  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:32.325119  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:32.386121  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.386153  291455 retry.go:31] will retry after 5.854435808s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.796226  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:32.834968  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:32.909984  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.910017  291455 retry.go:31] will retry after 7.163447884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:33.296541  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:33.796667  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:34.295628  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:34.796652  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:35.295756  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:35.493366  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:35.556021  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:35.556054  291455 retry.go:31] will retry after 12.955659755s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:35.796356  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:36.296236  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:36.796391  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:37.295746  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:37.795722  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:38.241525  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:38.295983  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:38.315189  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:38.315224  291455 retry.go:31] will retry after 8.402358708s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:38.795800  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:39.296313  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:39.795769  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:40.074570  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:40.142371  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:40.142407  291455 retry.go:31] will retry after 11.797804339s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:40.295684  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:40.795715  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:41.295800  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:41.796201  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:42.295677  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:42.795870  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:43.296206  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:43.795818  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:44.295727  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:44.795706  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:45.296501  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:45.795731  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:46.296084  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:46.717860  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:46.778291  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:46.778324  291455 retry.go:31] will retry after 11.640937008s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:46.796419  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:47.296365  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:47.796242  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:48.295728  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:48.512617  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:48.620306  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:48.620334  291455 retry.go:31] will retry after 20.936993287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:48.795684  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:49.296228  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:49.796588  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:50.296351  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:50.796261  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:51.296609  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:51.796731  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:51.941351  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:52.001637  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:52.001682  291455 retry.go:31] will retry after 15.364088557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:52.296092  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:52.795636  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:53.296512  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:53.811922  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:54.295780  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:54.795777  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:55.296163  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:55.796273  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:56.295752  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:56.795693  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:57.295887  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:57.796459  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:58.296209  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:58.419661  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:58.488403  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:58.488438  291455 retry.go:31] will retry after 29.791340434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:58.796698  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:59.295744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:59.796477  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:00.295794  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:00.795759  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:01.296237  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:01.796304  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:02.296424  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:02.795750  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:03.296298  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:03.796668  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:04.296158  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:04.796345  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:05.296665  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:05.796526  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:06.295717  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:06.795806  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:07.296383  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:07.366524  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:36:07.433303  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:07.433335  291455 retry.go:31] will retry after 21.959421138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:07.795756  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:08.296562  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:08.795685  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:09.295744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:09.558068  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:36:09.643748  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:09.643785  291455 retry.go:31] will retry after 31.140330108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
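Interleaved with the apply retries, the wait loop probes for the apiserver process itself roughly twice a second with sudo pgrep -xnf kube-apiserver.*minikube.*; the probe keeps failing silently, which is why the same line repeats for minutes below. A minimal sketch of such a poller follows, assuming only the command and the ~500ms interval visible in the log; this is not minikube's source.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess returns nil once pgrep finds a matching process.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -f matches against the full command line, -x requires the pattern to
		// match it exactly, -n picks the newest match. pgrep exits 0 only when
		// at least one process matches, so cmd.Run's error is the signal.
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}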
	I1212 01:36:09.796018  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:10.295683  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:10.795744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:11.295780  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:11.795645  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:12.295717  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:12.795762  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:13.296234  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:13.795775  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:14.296543  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:14.796297  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:15.295763  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:15.795884  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:16.296551  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:16.796640  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:17.295760  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:17.796208  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:18.296641  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:18.795858  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:18.795946  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:18.819559  291455 cri.go:89] found id: ""
	I1212 01:36:18.819585  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.819594  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:18.819605  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:18.819671  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:18.843419  291455 cri.go:89] found id: ""
	I1212 01:36:18.843444  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.843453  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:18.843459  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:18.843524  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:18.867870  291455 cri.go:89] found id: ""
	I1212 01:36:18.867894  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.867903  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:18.867910  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:18.867975  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:18.892504  291455 cri.go:89] found id: ""
	I1212 01:36:18.892528  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.892536  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:18.892543  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:18.892614  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:18.916462  291455 cri.go:89] found id: ""
	I1212 01:36:18.916484  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.916493  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:18.916499  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:18.916555  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:18.940793  291455 cri.go:89] found id: ""
	I1212 01:36:18.940818  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.940827  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:18.940833  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:18.940892  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:18.965485  291455 cri.go:89] found id: ""
	I1212 01:36:18.965513  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.965521  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:18.965527  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:18.965585  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:18.990141  291455 cri.go:89] found id: ""
	I1212 01:36:18.990170  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.990179  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:18.990189  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:18.990202  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:19.044826  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:19.044860  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:19.058338  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:19.058373  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:19.121541  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:19.113010    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.113711    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.115490    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.116077    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.117640    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:19.113010    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.113711    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.115490    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.116077    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.117640    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:19.121602  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:19.121622  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:19.146904  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:19.146941  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
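When the process never shows up, the loop falls back to a diagnostics sweep: it asks the CRI for each expected control-plane container (crictl ps -a --quiet --name=...), finds none, then gathers the kubelet and containerd journals, dmesg, a kubectl describe nodes (which fails with the same connection refused), and a container status listing with a crictl-or-docker fallback. A minimal sketch of the per-component container check, assuming only that sudo and crictl are on PATH; it is illustrative, not minikube's cri.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// With --quiet, crictl prints one container ID per line; an empty
		// result means the component never started (the found id: "" and
		// "No container was found matching ..." lines in the log above).
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}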
	I1212 01:36:21.678937  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:21.689641  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:21.689710  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:21.722833  291455 cri.go:89] found id: ""
	I1212 01:36:21.722854  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.722862  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:21.722869  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:21.722926  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:21.747286  291455 cri.go:89] found id: ""
	I1212 01:36:21.747323  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.747339  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:21.747346  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:21.747417  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:21.771941  291455 cri.go:89] found id: ""
	I1212 01:36:21.771965  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.771980  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:21.771987  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:21.772052  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:21.801075  291455 cri.go:89] found id: ""
	I1212 01:36:21.801104  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.801113  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:21.801119  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:21.801176  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:21.825561  291455 cri.go:89] found id: ""
	I1212 01:36:21.825587  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.825595  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:21.825601  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:21.825659  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:21.854532  291455 cri.go:89] found id: ""
	I1212 01:36:21.854559  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.854569  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:21.854580  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:21.854640  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:21.879725  291455 cri.go:89] found id: ""
	I1212 01:36:21.879789  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.879814  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:21.879828  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:21.879912  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:21.904405  291455 cri.go:89] found id: ""
	I1212 01:36:21.904428  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.904437  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:21.904446  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:21.904487  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:21.970611  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:21.962223    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.962657    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964375    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964860    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.966282    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:21.962223    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.962657    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964375    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964860    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.966282    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:21.970642  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:21.970659  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:21.995425  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:21.995463  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:22.024736  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:22.024767  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:22.082740  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:22.082785  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:24.597828  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:24.608497  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:24.608573  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:24.633951  291455 cri.go:89] found id: ""
	I1212 01:36:24.633978  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.633986  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:24.633992  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:24.634048  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:24.658904  291455 cri.go:89] found id: ""
	I1212 01:36:24.658929  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.658937  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:24.658944  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:24.659026  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:24.683684  291455 cri.go:89] found id: ""
	I1212 01:36:24.683709  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.683718  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:24.683724  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:24.683791  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:24.708745  291455 cri.go:89] found id: ""
	I1212 01:36:24.708770  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.708779  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:24.708786  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:24.708842  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:24.733454  291455 cri.go:89] found id: ""
	I1212 01:36:24.733479  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.733488  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:24.733494  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:24.733551  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:24.761862  291455 cri.go:89] found id: ""
	I1212 01:36:24.761889  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.761898  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:24.761904  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:24.761961  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:24.785388  291455 cri.go:89] found id: ""
	I1212 01:36:24.785415  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.785424  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:24.785430  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:24.785486  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:24.810681  291455 cri.go:89] found id: ""
	I1212 01:36:24.810707  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.810717  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:24.810727  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:24.810743  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:24.865711  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:24.865752  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:24.880399  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:24.880431  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:24.943187  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:24.935391    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.936083    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937614    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937904    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.939457    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:24.935391    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.936083    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937614    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937904    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.939457    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:24.943253  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:24.943274  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:24.967790  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:24.967820  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
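
The ~3-second cadence visible in the timestamps is a poll: check for a kube-apiserver process with pgrep, then ask crictl for a matching container ID, and treat empty output as "not up" (the `found id: ""` lines). A sketch of that loop, with the command strings copied from the log and the loop shape and timeout as assumptions:

    // waitapiserver.go — sketch of the apiserver poll repeated throughout this dump.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func apiserverUp() bool {
    	// Either a host process match or a CRI container ID counts as "up".
    	if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    		return true
    	}
    	out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
    	return strings.TrimSpace(string(out)) != "" // empty output is the `found id: ""` case
    }

    func main() {
    	deadline := time.Now().Add(6 * time.Minute) // illustrative timeout
    	for time.Now().Before(deadline) {
    		if apiserverUp() {
    			fmt.Println("kube-apiserver detected")
    			return
    		}
    		time.Sleep(3 * time.Second) // matches the cadence of the polls above
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }
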
	I1212 01:36:27.495634  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:27.506605  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:27.506700  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:27.548836  291455 cri.go:89] found id: ""
	I1212 01:36:27.548864  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.548873  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:27.548879  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:27.548953  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:27.600295  291455 cri.go:89] found id: ""
	I1212 01:36:27.600324  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.600334  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:27.600340  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:27.600397  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:27.625951  291455 cri.go:89] found id: ""
	I1212 01:36:27.625979  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.625987  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:27.625993  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:27.626062  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:27.651635  291455 cri.go:89] found id: ""
	I1212 01:36:27.651660  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.651668  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:27.651675  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:27.651734  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:27.676415  291455 cri.go:89] found id: ""
	I1212 01:36:27.676437  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.676446  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:27.676473  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:27.676535  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:27.699845  291455 cri.go:89] found id: ""
	I1212 01:36:27.699868  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.699876  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:27.699883  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:27.699938  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:27.735327  291455 cri.go:89] found id: ""
	I1212 01:36:27.735353  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.735362  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:27.735368  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:27.735428  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:27.759909  291455 cri.go:89] found id: ""
	I1212 01:36:27.759932  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.759940  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:27.759950  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:27.759961  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:27.786638  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:27.786667  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:27.841026  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:27.841058  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:27.854475  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:27.854508  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:27.917832  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:27.909374    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.909866    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911432    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911952    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.913437    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:27.909374    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.909866    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911432    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911952    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.913437    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:27.917855  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:27.917867  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:28.286241  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:36:28.389245  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:28.389279  291455 retry.go:31] will retry after 46.053342505s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
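
The odd-looking delay "will retry after 46.053342505s" (retry.go:31) is a randomized backoff: the wait is jittered so that parallel addon appliers do not retry in lockstep. A sketch of the shape only; the exact jitter formula is an assumption, not minikube's actual policy:

    // retrydemo.go — randomized retry of the kind producing the delays above.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func withRetry(attempts int, base time.Duration, f func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = f(); err == nil {
    			return nil
    		}
    		// Jitter the wait so concurrent appliers spread out their retries.
    		wait := base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %s: %v\n", wait, err)
    		time.Sleep(wait)
    	}
    	return err
    }

    func main() {
    	err := withRetry(3, 30*time.Second, func() error {
    		return errors.New("kubectl apply: connection refused") // stand-in failure
    	})
    	fmt.Println("gave up:", err)
    }
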
	I1212 01:36:29.393036  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:36:29.455460  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:29.455496  291455 retry.go:31] will retry after 47.570792587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
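
Note why these apply attempts fail at *validation*: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, so with nothing listening on 8443 the apply dies before the manifest is even inspected. Skipping validation, as the error text suggests, only moves the failure to the actual API call while the server is down. A hypothetical harness showing the flags involved (paths and flags mirror the log; the harness itself is illustrative):

    // applysketch.go — same apply as above, with client-side validation disabled.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	args := []string{
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"apply", "--force", "--validate=false",
    		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
    	}
    	out, err := exec.Command("kubectl", args...).CombinedOutput()
    	fmt.Printf("err=%v\n%s", err, out) // still fails while the apiserver is down
    }
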
	I1212 01:36:30.443136  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:30.453668  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:30.453743  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:30.480117  291455 cri.go:89] found id: ""
	I1212 01:36:30.480141  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.480149  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:30.480155  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:30.480214  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:30.505432  291455 cri.go:89] found id: ""
	I1212 01:36:30.505460  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.505470  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:30.505478  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:30.505543  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:30.530571  291455 cri.go:89] found id: ""
	I1212 01:36:30.530598  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.530608  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:30.530614  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:30.530675  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:30.587393  291455 cri.go:89] found id: ""
	I1212 01:36:30.587429  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.587439  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:30.587445  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:30.587517  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:30.631827  291455 cri.go:89] found id: ""
	I1212 01:36:30.631894  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.631917  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:30.631941  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:30.632019  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:30.655968  291455 cri.go:89] found id: ""
	I1212 01:36:30.656043  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.656065  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:30.656077  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:30.656143  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:30.680079  291455 cri.go:89] found id: ""
	I1212 01:36:30.680101  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.680110  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:30.680116  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:30.680175  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:30.704249  291455 cri.go:89] found id: ""
	I1212 01:36:30.704324  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.704346  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:30.704365  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:30.704391  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:30.760587  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:30.760620  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:30.774118  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:30.774145  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:30.838730  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:30.831029    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.831642    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.833120    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.833546    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.835035    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:30.831029    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.831642    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.833120    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.833546    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.835035    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:30.838753  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:30.838765  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:30.863650  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:30.863684  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
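
When triaging a dump like this, the useful signal is that *every* control-plane container is missing in every poll, not just one of them. A hypothetical helper (not part of minikube; the regex targets the `No container was found matching "..."` warnings emitted above) summarizes that at a glance:

    // triage.go — count missing-container warnings per component in a log dump.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	re := regexp.MustCompile(`No container was found matching "([^"]+)"`)
    	counts := map[string]int{}
    	sc := bufio.NewScanner(os.Stdin)
    	for sc.Scan() {
    		if m := re.FindStringSubmatch(sc.Text()); m != nil {
    			counts[m[1]]++
    		}
    	}
    	for name, n := range counts {
    		fmt.Printf("%-24s missing in %d polls\n", name, n)
    	}
    }

Usage against this report: go run triage.go < test.log — all eight component names should appear with the same count, confirming the whole control plane never came up rather than a single crashing pod.
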
	I1212 01:36:33.391024  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:33.401417  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:33.401486  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:33.425243  291455 cri.go:89] found id: ""
	I1212 01:36:33.425265  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.425274  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:33.425280  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:33.425337  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:33.451769  291455 cri.go:89] found id: ""
	I1212 01:36:33.451792  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.451800  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:33.451806  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:33.451869  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:33.476935  291455 cri.go:89] found id: ""
	I1212 01:36:33.476960  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.476968  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:33.476974  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:33.477035  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:33.502755  291455 cri.go:89] found id: ""
	I1212 01:36:33.502781  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.502796  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:33.502802  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:33.502859  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:33.528810  291455 cri.go:89] found id: ""
	I1212 01:36:33.528835  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.528844  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:33.528851  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:33.528915  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:33.559119  291455 cri.go:89] found id: ""
	I1212 01:36:33.559197  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.559219  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:33.559237  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:33.559321  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:33.624518  291455 cri.go:89] found id: ""
	I1212 01:36:33.624547  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.624556  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:33.624562  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:33.624620  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:33.657379  291455 cri.go:89] found id: ""
	I1212 01:36:33.657401  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.657409  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:33.657418  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:33.657428  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:33.713396  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:33.713430  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:33.727420  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:33.727450  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:33.796759  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:33.788822    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.789567    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.791169    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.791683    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.792822    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:33.788822    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.789567    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.791169    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.791683    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.792822    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:33.796782  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:33.796795  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:33.822210  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:33.822246  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:36.350581  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:36.361065  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:36.361139  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:36.384625  291455 cri.go:89] found id: ""
	I1212 01:36:36.384647  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.384655  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:36.384661  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:36.384721  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:36.409313  291455 cri.go:89] found id: ""
	I1212 01:36:36.409338  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.409347  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:36.409353  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:36.409414  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:36.437773  291455 cri.go:89] found id: ""
	I1212 01:36:36.437796  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.437804  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:36.437811  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:36.437875  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:36.462058  291455 cri.go:89] found id: ""
	I1212 01:36:36.462080  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.462089  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:36.462096  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:36.462158  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:36.485881  291455 cri.go:89] found id: ""
	I1212 01:36:36.485902  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.485911  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:36.485917  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:36.485973  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:36.510249  291455 cri.go:89] found id: ""
	I1212 01:36:36.510318  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.510340  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:36.510362  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:36.510444  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:36.546913  291455 cri.go:89] found id: ""
	I1212 01:36:36.546948  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.546957  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:36.546963  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:36.547067  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:36.604532  291455 cri.go:89] found id: ""
	I1212 01:36:36.604562  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.604571  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:36.604580  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:36.604593  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:36.684036  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:36.674581    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.675420    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.677203    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.677878    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.679666    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:36.674581    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.675420    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.677203    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.677878    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.679666    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:36.684061  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:36.684074  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:36.709835  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:36.709866  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:36.737742  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:36.737768  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:36.792829  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:36.792864  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:39.307416  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:39.317852  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:39.317952  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:39.342723  291455 cri.go:89] found id: ""
	I1212 01:36:39.342747  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.342756  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:39.342763  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:39.342821  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:39.367433  291455 cri.go:89] found id: ""
	I1212 01:36:39.367472  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.367485  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:39.367492  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:39.367559  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:39.392871  291455 cri.go:89] found id: ""
	I1212 01:36:39.392896  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.392904  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:39.392911  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:39.392974  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:39.417519  291455 cri.go:89] found id: ""
	I1212 01:36:39.417546  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.417555  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:39.417562  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:39.417621  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:39.441729  291455 cri.go:89] found id: ""
	I1212 01:36:39.441760  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.441769  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:39.441775  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:39.441841  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:39.466118  291455 cri.go:89] found id: ""
	I1212 01:36:39.466147  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.466156  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:39.466163  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:39.466225  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:39.491269  291455 cri.go:89] found id: ""
	I1212 01:36:39.491292  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.491304  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:39.491310  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:39.491375  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:39.515625  291455 cri.go:89] found id: ""
	I1212 01:36:39.515650  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.515659  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:39.515668  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:39.515679  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:39.595337  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:39.595376  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:39.617464  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:39.617500  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:39.698043  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:39.689431    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.689924    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.691689    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.692010    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.693641    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:39.689431    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.689924    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.691689    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.692010    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.693641    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:39.698068  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:39.698080  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:39.722656  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:39.722692  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:40.784380  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:36:40.845895  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:36:40.846018  291455 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
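
The storageclass failure above is reported twice on purpose: once as a W-level structured log record (out.go:285) and once as the plain "!"-prefixed line shown to the user. A minimal sketch of that two-sink pattern, assuming nothing about minikube's real out package beyond the behavior visible here:

    // twosink.go — emit one warning to both the structured log and the console.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    )

    func warnUser(format string, a ...interface{}) {
    	msg := fmt.Sprintf(format, a...)
    	log.Printf("W %s", msg)               // machine-readable record
    	fmt.Fprintf(os.Stderr, "! %s\n", msg) // human-facing line
    }

    func main() {
    	warnUser("Enabling '%s' returned an error: %v", "default-storageclass", "connection refused")
    }
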
	I1212 01:36:42.256252  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:42.269504  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:42.269576  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:42.296285  291455 cri.go:89] found id: ""
	I1212 01:36:42.296314  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.296323  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:42.296330  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:42.296393  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:42.324314  291455 cri.go:89] found id: ""
	I1212 01:36:42.324349  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.324366  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:42.324373  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:42.324448  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:42.353000  291455 cri.go:89] found id: ""
	I1212 01:36:42.353024  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.353033  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:42.353039  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:42.353103  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:42.379029  291455 cri.go:89] found id: ""
	I1212 01:36:42.379057  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.379066  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:42.379073  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:42.379141  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:42.404039  291455 cri.go:89] found id: ""
	I1212 01:36:42.404068  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.404077  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:42.404084  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:42.404150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:42.429848  291455 cri.go:89] found id: ""
	I1212 01:36:42.429877  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.429887  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:42.429893  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:42.429952  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:42.454022  291455 cri.go:89] found id: ""
	I1212 01:36:42.454049  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.454058  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:42.454065  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:42.454126  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:42.481205  291455 cri.go:89] found id: ""
	I1212 01:36:42.481231  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.481240  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:42.481249  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:42.481260  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:42.511373  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:42.511400  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:42.594053  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:42.594092  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:42.613172  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:42.613201  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:42.688118  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:42.678899    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.679678    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.681197    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.681708    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.683477    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:42.678899    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.679678    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.681197    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.681708    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.683477    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:42.688142  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:42.688155  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
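The cycle above is the wait loop these logs show minikube running while the apiserver never comes up: probe for a kube-apiserver process, and when none is found, gather kubelet/dmesg/containerd/container-status logs before probing again. A minimal Go sketch of that probe loop, under assumptions noted in the comments (apiserverRunning and the 2-minute deadline are invented for the example, not minikube's real code):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the "sudo pgrep -xnf kube-apiserver.*minikube.*"
    // probe from the log; pgrep exits non-zero when nothing matches.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed timeout for the sketch
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver is up")
                return
            }
            // In the real logs this is where the kubelet, dmesg, containerd,
            // and container-status logs are gathered before the next probe.
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }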
	I1212 01:36:45.213644  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:45.234582  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:45.234677  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:45.268686  291455 cri.go:89] found id: ""
	I1212 01:36:45.268715  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.268732  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:45.268741  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:45.268827  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:45.297061  291455 cri.go:89] found id: ""
	I1212 01:36:45.297115  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.297132  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:45.297139  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:45.297272  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:45.324030  291455 cri.go:89] found id: ""
	I1212 01:36:45.324063  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.324072  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:45.324078  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:45.324144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:45.354569  291455 cri.go:89] found id: ""
	I1212 01:36:45.354595  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.354612  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:45.354619  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:45.354697  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:45.380068  291455 cri.go:89] found id: ""
	I1212 01:36:45.380133  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.380160  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:45.380175  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:45.380249  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:45.403554  291455 cri.go:89] found id: ""
	I1212 01:36:45.403620  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.403643  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:45.403664  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:45.403746  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:45.426534  291455 cri.go:89] found id: ""
	I1212 01:36:45.426560  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.426568  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:45.426574  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:45.426637  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:45.455346  291455 cri.go:89] found id: ""
	I1212 01:36:45.455414  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.455438  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:45.455457  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:45.455469  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:45.510486  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:45.510521  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:45.523916  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:45.523944  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:45.642152  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:45.624680    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.625385    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.635164    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.635878    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.637755    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:45.624680    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.625385    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.635164    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.635878    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.637755    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:45.642173  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:45.642186  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:45.667625  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:45.667661  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:48.197188  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:48.208199  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:48.208272  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:48.236943  291455 cri.go:89] found id: ""
	I1212 01:36:48.236969  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.236977  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:48.236984  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:48.237048  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:48.262444  291455 cri.go:89] found id: ""
	I1212 01:36:48.262468  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.262477  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:48.262483  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:48.262545  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:48.292262  291455 cri.go:89] found id: ""
	I1212 01:36:48.292292  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.292301  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:48.292307  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:48.292370  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:48.318028  291455 cri.go:89] found id: ""
	I1212 01:36:48.318053  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.318063  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:48.318069  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:48.318128  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:48.343500  291455 cri.go:89] found id: ""
	I1212 01:36:48.343524  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.343532  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:48.343539  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:48.343620  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:48.374537  291455 cri.go:89] found id: ""
	I1212 01:36:48.374563  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.374572  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:48.374578  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:48.374657  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:48.399165  291455 cri.go:89] found id: ""
	I1212 01:36:48.399188  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.399197  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:48.399203  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:48.399265  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:48.424429  291455 cri.go:89] found id: ""
	I1212 01:36:48.424452  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.424460  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:48.424469  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:48.424482  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:48.450297  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:48.450336  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:48.477992  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:48.478017  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:48.533513  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:48.533546  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:48.554972  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:48.555078  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:48.639199  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:48.628523    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.629323    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.630881    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.631460    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.634979    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:48.628523    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.629323    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.630881    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.631460    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.634979    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:51.139443  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:51.152801  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:51.152869  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:51.181036  291455 cri.go:89] found id: ""
	I1212 01:36:51.181060  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.181069  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:51.181076  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:51.181139  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:51.205637  291455 cri.go:89] found id: ""
	I1212 01:36:51.205664  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.205673  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:51.205680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:51.205744  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:51.230375  291455 cri.go:89] found id: ""
	I1212 01:36:51.230401  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.230410  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:51.230416  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:51.230479  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:51.260594  291455 cri.go:89] found id: ""
	I1212 01:36:51.260620  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.260629  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:51.260636  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:51.260693  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:51.286513  291455 cri.go:89] found id: ""
	I1212 01:36:51.286538  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.286548  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:51.286554  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:51.286613  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:51.320488  291455 cri.go:89] found id: ""
	I1212 01:36:51.320511  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.320519  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:51.320526  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:51.320593  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:51.346751  291455 cri.go:89] found id: ""
	I1212 01:36:51.346773  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.346782  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:51.346788  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:51.346848  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:51.372774  291455 cri.go:89] found id: ""
	I1212 01:36:51.372797  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.372805  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:51.372820  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:51.372832  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:51.397287  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:51.397322  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:51.424395  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:51.424423  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:51.484364  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:51.484400  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:51.497751  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:51.497778  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:51.609432  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:51.593650    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.595213    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.596974    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.601995    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.602562    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:51.593650    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.595213    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.596974    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.601995    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.602562    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:54.111055  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:54.123333  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:54.123404  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:54.147152  291455 cri.go:89] found id: ""
	I1212 01:36:54.147218  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.147246  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:54.147268  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:54.147370  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:54.172120  291455 cri.go:89] found id: ""
	I1212 01:36:54.172186  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.172212  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:54.172233  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:54.172318  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:54.199177  291455 cri.go:89] found id: ""
	I1212 01:36:54.199242  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.199262  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:54.199269  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:54.199346  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:54.223691  291455 cri.go:89] found id: ""
	I1212 01:36:54.223716  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.223724  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:54.223731  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:54.223796  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:54.248969  291455 cri.go:89] found id: ""
	I1212 01:36:54.248991  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.249000  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:54.249007  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:54.249076  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:54.274124  291455 cri.go:89] found id: ""
	I1212 01:36:54.274149  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.274158  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:54.274165  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:54.274223  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:54.299049  291455 cri.go:89] found id: ""
	I1212 01:36:54.299071  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.299079  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:54.299085  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:54.299142  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:54.323692  291455 cri.go:89] found id: ""
	I1212 01:36:54.323727  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.323736  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:54.323745  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:54.323757  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:54.337075  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:54.337102  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:54.405905  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:54.396717    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.397409    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399032    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399536    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.401700    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:54.396717    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.397409    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399032    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399536    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.401700    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:54.405927  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:54.405938  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:54.432446  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:54.432489  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:54.461143  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:54.461170  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:57.017892  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:57.031680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:57.031754  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:57.058619  291455 cri.go:89] found id: ""
	I1212 01:36:57.058644  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.058661  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:57.058670  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:57.058744  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:57.082470  291455 cri.go:89] found id: ""
	I1212 01:36:57.082496  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.082505  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:57.082511  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:57.082569  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:57.107129  291455 cri.go:89] found id: ""
	I1212 01:36:57.107152  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.107161  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:57.107174  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:57.107235  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:57.131240  291455 cri.go:89] found id: ""
	I1212 01:36:57.131264  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.131272  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:57.131282  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:57.131339  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:57.161702  291455 cri.go:89] found id: ""
	I1212 01:36:57.161728  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.161737  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:57.161743  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:57.161800  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:57.186568  291455 cri.go:89] found id: ""
	I1212 01:36:57.186592  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.186601  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:57.186607  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:57.186724  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:57.211286  291455 cri.go:89] found id: ""
	I1212 01:36:57.211310  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.211319  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:57.211325  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:57.211382  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:57.236370  291455 cri.go:89] found id: ""
	I1212 01:36:57.236394  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.236403  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:57.236412  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:57.236423  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:57.292504  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:57.292539  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:57.306287  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:57.306314  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:57.369836  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:57.361540    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.362207    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.363914    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.364465    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.366079    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:57.361540    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.362207    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.363914    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.364465    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.366079    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:57.369856  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:57.369870  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:57.395588  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:57.395625  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:59.923774  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:59.935843  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:59.935936  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:59.961362  291455 cri.go:89] found id: ""
	I1212 01:36:59.961383  291455 logs.go:282] 0 containers: []
	W1212 01:36:59.961392  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:59.961398  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:59.961453  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:59.987418  291455 cri.go:89] found id: ""
	I1212 01:36:59.987448  291455 logs.go:282] 0 containers: []
	W1212 01:36:59.987458  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:59.987463  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:59.987521  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:00.083321  291455 cri.go:89] found id: ""
	I1212 01:37:00.083352  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.083362  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:00.083369  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:00.083456  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:00.200170  291455 cri.go:89] found id: ""
	I1212 01:37:00.200535  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.200580  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:00.200686  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:00.201034  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:00.291145  291455 cri.go:89] found id: ""
	I1212 01:37:00.291235  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.291284  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:00.291318  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:00.291414  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:00.393558  291455 cri.go:89] found id: ""
	I1212 01:37:00.393606  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.393618  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:00.393626  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:00.393706  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:00.423985  291455 cri.go:89] found id: ""
	I1212 01:37:00.424023  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.424035  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:00.424041  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:00.424117  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:00.451670  291455 cri.go:89] found id: ""
	I1212 01:37:00.451695  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.451705  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:00.451715  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:00.451728  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:00.509577  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:00.509614  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:00.525099  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:00.525133  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:00.635419  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:00.627409    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.628095    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.629751    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.630057    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.631588    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:00.627409    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.628095    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.629751    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.630057    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.631588    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:00.635455  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:00.635468  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:00.663944  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:00.663984  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:03.194688  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:03.205352  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:03.205425  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:03.233099  291455 cri.go:89] found id: ""
	I1212 01:37:03.233131  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.233140  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:03.233146  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:03.233217  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:03.257676  291455 cri.go:89] found id: ""
	I1212 01:37:03.257700  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.257710  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:03.257716  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:03.257802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:03.282622  291455 cri.go:89] found id: ""
	I1212 01:37:03.282696  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.282719  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:03.282739  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:03.282834  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:03.309162  291455 cri.go:89] found id: ""
	I1212 01:37:03.309190  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.309199  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:03.309205  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:03.309265  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:03.334284  291455 cri.go:89] found id: ""
	I1212 01:37:03.334318  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.334327  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:03.334334  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:03.334401  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:03.361255  291455 cri.go:89] found id: ""
	I1212 01:37:03.361281  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.361290  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:03.361296  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:03.361376  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:03.386372  291455 cri.go:89] found id: ""
	I1212 01:37:03.386406  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.386415  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:03.386421  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:03.386490  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:03.412127  291455 cri.go:89] found id: ""
	I1212 01:37:03.412151  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.412160  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:03.412170  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:03.412181  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:03.467933  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:03.467980  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:03.481636  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:03.481663  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:03.565451  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:03.551611    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.552450    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.553999    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.554567    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.556109    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:03.551611    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.552450    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.553999    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.554567    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.556109    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:03.565476  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:03.565548  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:03.614744  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:03.614783  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
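For reference, the probe cycle above is straightforward to reproduce by hand. A minimal sketch, assuming a shell on the node (e.g. via "minikube ssh -p <profile>"; the profile name is not shown in these lines) and that crictl is on the PATH:

	# query the CRI for each control-plane container minikube expects;
	# empty output corresponds to the 'found id: ""' lines in the log
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"
	done

Every name returning nothing is consistent with the static pods never having been created, which fits the connection-refused errors against localhost:8443 that follow.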
	I1212 01:37:06.159160  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:06.169841  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:06.169916  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:06.196496  291455 cri.go:89] found id: ""
	I1212 01:37:06.196521  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.196529  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:06.196536  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:06.196594  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:06.229404  291455 cri.go:89] found id: ""
	I1212 01:37:06.229429  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.229438  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:06.229444  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:06.229505  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:06.254056  291455 cri.go:89] found id: ""
	I1212 01:37:06.254081  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.254089  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:06.254095  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:06.254154  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:06.278424  291455 cri.go:89] found id: ""
	I1212 01:37:06.278453  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.278462  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:06.278469  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:06.278527  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:06.302517  291455 cri.go:89] found id: ""
	I1212 01:37:06.302545  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.302554  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:06.302560  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:06.302617  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:06.328634  291455 cri.go:89] found id: ""
	I1212 01:37:06.328657  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.328665  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:06.328671  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:06.328728  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:06.352026  291455 cri.go:89] found id: ""
	I1212 01:37:06.352099  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.352115  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:06.352125  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:06.352199  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:06.376075  291455 cri.go:89] found id: ""
	I1212 01:37:06.376101  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.376110  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:06.376119  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:06.376130  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:06.400451  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:06.400481  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:06.428356  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:06.428385  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:06.484230  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:06.484267  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:06.498047  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:06.498074  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:06.610705  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:06.593235    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.594305    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.599655    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.603092    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.603422    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:06.593235    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.594305    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.599655    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.603092    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.603422    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:09.111534  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:09.121786  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:09.121855  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:09.148241  291455 cri.go:89] found id: ""
	I1212 01:37:09.148267  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.148275  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:09.148282  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:09.148341  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:09.172742  291455 cri.go:89] found id: ""
	I1212 01:37:09.172764  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.172773  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:09.172779  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:09.172835  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:09.197560  291455 cri.go:89] found id: ""
	I1212 01:37:09.197586  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.197595  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:09.197601  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:09.197673  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:09.222352  291455 cri.go:89] found id: ""
	I1212 01:37:09.222377  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.222386  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:09.222392  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:09.222450  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:09.246770  291455 cri.go:89] found id: ""
	I1212 01:37:09.246794  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.246802  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:09.246809  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:09.246875  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:09.273237  291455 cri.go:89] found id: ""
	I1212 01:37:09.273260  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.273268  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:09.273275  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:09.273342  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:09.298382  291455 cri.go:89] found id: ""
	I1212 01:37:09.298405  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.298414  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:09.298421  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:09.298479  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:09.326366  291455 cri.go:89] found id: ""
	I1212 01:37:09.326388  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.326396  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:09.326405  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:09.326416  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:09.339892  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:09.339920  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:09.408533  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:09.399583    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.400465    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.402243    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.402860    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.404361    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:09.399583    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.400465    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.402243    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.402860    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.404361    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:09.408555  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:09.408568  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:09.434113  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:09.434149  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:09.469040  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:09.469065  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:12.025102  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:12.036649  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:12.036722  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:12.064882  291455 cri.go:89] found id: ""
	I1212 01:37:12.064905  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.064913  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:12.064919  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:12.064979  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:12.090328  291455 cri.go:89] found id: ""
	I1212 01:37:12.090354  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.090362  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:12.090369  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:12.090429  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:12.115640  291455 cri.go:89] found id: ""
	I1212 01:37:12.115665  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.115674  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:12.115680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:12.115741  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:12.140726  291455 cri.go:89] found id: ""
	I1212 01:37:12.140752  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.140773  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:12.140810  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:12.140900  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:12.165182  291455 cri.go:89] found id: ""
	I1212 01:37:12.165208  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.165216  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:12.165223  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:12.165282  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:12.189365  291455 cri.go:89] found id: ""
	I1212 01:37:12.189389  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.189398  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:12.189405  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:12.189463  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:12.214048  291455 cri.go:89] found id: ""
	I1212 01:37:12.214073  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.214082  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:12.214088  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:12.214148  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:12.240794  291455 cri.go:89] found id: ""
	I1212 01:37:12.240821  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.240830  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:12.240840  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:12.240851  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:12.300894  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:12.300936  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:12.314783  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:12.314817  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:12.382362  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:12.373621    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.374371    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.376069    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.376636    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.378249    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:12.373621    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.374371    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.376069    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.376636    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.378249    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:12.382385  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:12.382397  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:12.408884  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:12.408921  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:14.444251  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:37:14.509220  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:37:14.509386  291455 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
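The dashboard apply fails during client-side validation: kubectl first fetches the OpenAPI schema from the apiserver, and that request is what hits the refused connection on localhost:8443. The stderr's suggestion of --validate=false therefore only moves the failure, since the apply itself still needs a reachable apiserver. A hedged manual retry for a single manifest, using only paths and flags taken from the error text above:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/dashboard-ns.yaml
	# expect this to keep failing with "connection refused" until something listens on :8443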
	I1212 01:37:14.942929  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:14.953301  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:14.953373  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:14.977865  291455 cri.go:89] found id: ""
	I1212 01:37:14.977933  291455 logs.go:282] 0 containers: []
	W1212 01:37:14.977947  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:14.977954  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:14.978019  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:15.012296  291455 cri.go:89] found id: ""
	I1212 01:37:15.012325  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.012335  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:15.012342  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:15.012414  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:15.044602  291455 cri.go:89] found id: ""
	I1212 01:37:15.044629  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.044638  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:15.044644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:15.044705  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:15.072008  291455 cri.go:89] found id: ""
	I1212 01:37:15.072035  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.072043  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:15.072049  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:15.072112  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:15.098264  291455 cri.go:89] found id: ""
	I1212 01:37:15.098293  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.098308  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:15.098316  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:15.098390  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:15.124176  291455 cri.go:89] found id: ""
	I1212 01:37:15.124203  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.124212  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:15.124218  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:15.124278  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:15.148763  291455 cri.go:89] found id: ""
	I1212 01:37:15.148788  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.148797  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:15.148803  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:15.148880  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:15.173843  291455 cri.go:89] found id: ""
	I1212 01:37:15.173870  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.173879  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:15.173889  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:15.173901  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:15.203728  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:15.203757  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:15.259019  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:15.259053  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:15.272480  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:15.272509  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:15.337558  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:15.329071    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.329763    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.331497    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.332089    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.333695    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:15.329071    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.329763    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.331497    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.332089    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.333695    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:15.337580  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:15.337592  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:17.027133  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:37:17.109229  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:37:17.109319  291455 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 01:37:17.112386  291455 out.go:179] * Enabled addons: 
	I1212 01:37:17.115266  291455 addons.go:530] duration metric: took 1m58.649036473s for enable addons: enabled=[]
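Between these retries minikube is effectively waiting for a healthy apiserver. A quick way to check by hand, sketched under the assumption of a shell on the node (the /livez and /readyz endpoints are standard Kubernetes health checks, not shown in this log):

	# liveness/readiness probes; each prints "ok" once the control plane is up
	curl -k https://localhost:8443/livez
	curl -k https://localhost:8443/readyz
	# the process check minikube itself runs, per the pgrep lines above
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'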
	I1212 01:37:17.864277  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:17.875687  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:17.875762  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:17.900504  291455 cri.go:89] found id: ""
	I1212 01:37:17.900527  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.900536  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:17.900542  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:17.900626  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:17.925113  291455 cri.go:89] found id: ""
	I1212 01:37:17.925136  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.925145  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:17.925151  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:17.925238  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:17.950585  291455 cri.go:89] found id: ""
	I1212 01:37:17.950611  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.950620  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:17.950626  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:17.950687  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:17.977787  291455 cri.go:89] found id: ""
	I1212 01:37:17.977813  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.977822  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:17.977828  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:17.977888  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:18.006885  291455 cri.go:89] found id: ""
	I1212 01:37:18.006967  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.007019  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:18.007043  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:18.007118  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:18.033137  291455 cri.go:89] found id: ""
	I1212 01:37:18.033161  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.033170  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:18.033176  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:18.033238  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:18.058968  291455 cri.go:89] found id: ""
	I1212 01:37:18.059009  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.059019  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:18.059025  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:18.059087  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:18.084927  291455 cri.go:89] found id: ""
	I1212 01:37:18.084961  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.084971  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:18.084981  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:18.084994  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:18.153070  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:18.145061    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.145891    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.147207    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.147819    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.149000    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:18.145061    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.145891    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.147207    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.147819    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.149000    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:18.153101  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:18.153113  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:18.178193  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:18.178227  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:18.205844  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:18.205874  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:18.261619  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:18.261657  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:20.775910  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:20.797119  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:20.797192  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:20.870519  291455 cri.go:89] found id: ""
	I1212 01:37:20.870556  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.870566  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:20.870573  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:20.870642  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:20.895021  291455 cri.go:89] found id: ""
	I1212 01:37:20.895044  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.895053  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:20.895059  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:20.895119  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:20.918242  291455 cri.go:89] found id: ""
	I1212 01:37:20.918270  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.918279  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:20.918286  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:20.918340  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:20.942755  291455 cri.go:89] found id: ""
	I1212 01:37:20.942781  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.942790  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:20.942796  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:20.942855  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:20.966487  291455 cri.go:89] found id: ""
	I1212 01:37:20.966551  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.966574  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:20.966595  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:20.966680  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:20.992848  291455 cri.go:89] found id: ""
	I1212 01:37:20.992922  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.992945  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:20.992959  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:20.993035  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:21.025558  291455 cri.go:89] found id: ""
	I1212 01:37:21.025587  291455 logs.go:282] 0 containers: []
	W1212 01:37:21.025596  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:21.025602  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:21.025663  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:21.050967  291455 cri.go:89] found id: ""
	I1212 01:37:21.051023  291455 logs.go:282] 0 containers: []
	W1212 01:37:21.051032  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:21.051041  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:21.051057  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:21.077368  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:21.077396  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:21.133503  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:21.133538  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:21.147218  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:21.147245  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:21.209763  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:21.201479    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.202138    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.203803    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.204409    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.205960    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:21.201479    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.202138    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.203803    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.204409    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.205960    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:21.209786  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:21.209799  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
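
Every `kubectl describe nodes` attempt in this section fails the same way: kubectl first tries to refresh its cached API group list (the repeated memcache.go:265 errors) and each request to https://localhost:8443 is refused. That is consistent with the empty `crictl` listings above: no kube-apiserver container exists, so nothing is listening on 8443. A quick way to confirm the port is closed from inside the node is sketched below; the availability of `curl` and `ss` in the node image is an assumption:

	# expect "connection refused" while the apiserver is down
	curl -ks https://localhost:8443/healthz; echo "exit=$?"
	# confirm nothing is bound to 8443
	sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
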
	I1212 01:37:23.737746  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:23.747983  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:23.748051  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:23.772289  291455 cri.go:89] found id: ""
	I1212 01:37:23.772315  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.772333  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:23.772341  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:23.772420  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:23.848280  291455 cri.go:89] found id: ""
	I1212 01:37:23.848306  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.848315  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:23.848322  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:23.848386  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:23.884675  291455 cri.go:89] found id: ""
	I1212 01:37:23.884700  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.884709  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:23.884715  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:23.884777  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:23.914530  291455 cri.go:89] found id: ""
	I1212 01:37:23.914553  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.914561  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:23.914569  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:23.914626  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:23.940203  291455 cri.go:89] found id: ""
	I1212 01:37:23.940275  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.940292  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:23.940299  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:23.940364  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:23.968920  291455 cri.go:89] found id: ""
	I1212 01:37:23.968944  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.968952  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:23.968959  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:23.969016  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:23.993883  291455 cri.go:89] found id: ""
	I1212 01:37:23.993910  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.993919  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:23.993925  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:23.993985  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:24.019876  291455 cri.go:89] found id: ""
	I1212 01:37:24.019901  291455 logs.go:282] 0 containers: []
	W1212 01:37:24.019909  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:24.019922  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:24.019935  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:24.052560  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:24.052586  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:24.107812  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:24.107847  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:24.121870  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:24.121902  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:24.193432  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:24.184434    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.184974    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.185943    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.187426    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.187845    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:24.184434    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.184974    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.185943    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.187426    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.187845    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:24.193458  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:24.193471  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:26.720901  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:26.732114  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:26.732194  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:26.759421  291455 cri.go:89] found id: ""
	I1212 01:37:26.759443  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.759451  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:26.759458  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:26.759523  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:26.801227  291455 cri.go:89] found id: ""
	I1212 01:37:26.801252  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.801261  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:26.801290  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:26.801371  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:26.836143  291455 cri.go:89] found id: ""
	I1212 01:37:26.836168  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.836178  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:26.836184  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:26.836276  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:26.880334  291455 cri.go:89] found id: ""
	I1212 01:37:26.880373  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.880382  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:26.880388  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:26.880477  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:26.915704  291455 cri.go:89] found id: ""
	I1212 01:37:26.915769  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.915786  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:26.915793  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:26.915864  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:26.943219  291455 cri.go:89] found id: ""
	I1212 01:37:26.943252  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.943262  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:26.943269  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:26.943350  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:26.968790  291455 cri.go:89] found id: ""
	I1212 01:37:26.968867  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.968882  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:26.968889  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:26.968946  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:26.993867  291455 cri.go:89] found id: ""
	I1212 01:37:26.993892  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.993908  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:26.993918  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:26.993929  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:27.025483  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:27.025547  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:27.081672  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:27.081704  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:27.095698  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:27.095724  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:27.161161  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:27.151369    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.152034    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.153696    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.156078    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.157312    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:27.151369    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.152034    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.153696    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.156078    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.157312    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:27.161189  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:27.161202  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:29.686768  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:29.699055  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:29.699131  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:29.725025  291455 cri.go:89] found id: ""
	I1212 01:37:29.725050  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.725059  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:29.725065  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:29.725140  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:29.749378  291455 cri.go:89] found id: ""
	I1212 01:37:29.749401  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.749410  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:29.749416  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:29.749481  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:29.773953  291455 cri.go:89] found id: ""
	I1212 01:37:29.773978  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.773987  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:29.773993  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:29.774052  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:29.831695  291455 cri.go:89] found id: ""
	I1212 01:37:29.831723  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.831732  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:29.831738  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:29.831794  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:29.881376  291455 cri.go:89] found id: ""
	I1212 01:37:29.881401  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.881412  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:29.881418  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:29.881477  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:29.905463  291455 cri.go:89] found id: ""
	I1212 01:37:29.905497  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.905506  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:29.905530  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:29.905618  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:29.929393  291455 cri.go:89] found id: ""
	I1212 01:37:29.929427  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.929436  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:29.929442  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:29.929507  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:29.956794  291455 cri.go:89] found id: ""
	I1212 01:37:29.956820  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.956829  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:29.956839  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:29.956850  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:29.981845  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:29.981878  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:30.037712  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:30.037751  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:30.096286  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:30.096320  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:30.111120  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:30.111160  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:30.180653  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:30.171653    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.172384    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.174167    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.174765    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.176527    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:30.171653    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.172384    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.174167    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.174765    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.176527    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:32.681768  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:32.693283  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:32.693354  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:32.720606  291455 cri.go:89] found id: ""
	I1212 01:37:32.720629  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.720638  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:32.720644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:32.720703  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:32.747145  291455 cri.go:89] found id: ""
	I1212 01:37:32.747167  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.747177  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:32.747185  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:32.747243  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:32.772037  291455 cri.go:89] found id: ""
	I1212 01:37:32.772061  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.772070  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:32.772076  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:32.772134  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:32.862885  291455 cri.go:89] found id: ""
	I1212 01:37:32.862910  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.862919  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:32.862925  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:32.862983  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:32.888016  291455 cri.go:89] found id: ""
	I1212 01:37:32.888038  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.888049  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:32.888055  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:32.888115  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:32.912450  291455 cri.go:89] found id: ""
	I1212 01:37:32.912472  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.912481  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:32.912487  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:32.912544  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:32.935759  291455 cri.go:89] found id: ""
	I1212 01:37:32.935781  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.935790  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:32.935797  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:32.935855  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:32.963827  291455 cri.go:89] found id: ""
	I1212 01:37:32.963850  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.963858  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:32.963869  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:32.963880  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:32.988758  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:32.988788  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:33.021942  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:33.021973  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:33.078907  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:33.078940  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:33.094242  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:33.094270  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:33.157981  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:33.149433    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.150328    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.151907    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.152360    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.153844    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:33.149433    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.150328    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.151907    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.152360    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.153844    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:35.659737  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:35.672022  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:35.672098  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:35.701308  291455 cri.go:89] found id: ""
	I1212 01:37:35.701334  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.701343  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:35.701349  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:35.701408  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:35.726385  291455 cri.go:89] found id: ""
	I1212 01:37:35.726409  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.726418  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:35.726424  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:35.726482  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:35.751557  291455 cri.go:89] found id: ""
	I1212 01:37:35.751593  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.751604  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:35.751610  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:35.751679  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:35.776892  291455 cri.go:89] found id: ""
	I1212 01:37:35.776956  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.776971  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:35.776982  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:35.777044  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:35.824076  291455 cri.go:89] found id: ""
	I1212 01:37:35.824107  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.824116  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:35.824122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:35.824179  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:35.880084  291455 cri.go:89] found id: ""
	I1212 01:37:35.880107  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.880115  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:35.880122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:35.880192  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:35.907066  291455 cri.go:89] found id: ""
	I1212 01:37:35.907091  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.907099  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:35.907105  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:35.907166  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:35.936636  291455 cri.go:89] found id: ""
	I1212 01:37:35.936713  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.936729  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:35.936739  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:35.936750  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:35.993085  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:35.993119  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:36.007767  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:36.007856  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:36.076959  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:36.068314    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.068888    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.070632    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.071390    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.072929    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:36.068314    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.068888    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.070632    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.071390    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.072929    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:36.076984  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:36.076997  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:36.103429  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:36.103463  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:38.632890  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:38.643831  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:38.643909  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:38.671085  291455 cri.go:89] found id: ""
	I1212 01:37:38.671108  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.671116  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:38.671122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:38.671182  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:38.694933  291455 cri.go:89] found id: ""
	I1212 01:37:38.694958  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.694966  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:38.694972  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:38.695070  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:38.723033  291455 cri.go:89] found id: ""
	I1212 01:37:38.723060  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.723069  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:38.723075  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:38.723135  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:38.748068  291455 cri.go:89] found id: ""
	I1212 01:37:38.748093  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.748102  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:38.748109  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:38.748169  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:38.778336  291455 cri.go:89] found id: ""
	I1212 01:37:38.778362  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.778371  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:38.778377  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:38.778438  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:38.824425  291455 cri.go:89] found id: ""
	I1212 01:37:38.824452  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.824461  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:38.824468  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:38.824526  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:38.869581  291455 cri.go:89] found id: ""
	I1212 01:37:38.869607  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.869616  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:38.869623  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:38.869684  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:38.898375  291455 cri.go:89] found id: ""
	I1212 01:37:38.898401  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.898411  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:38.898420  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:38.898431  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:38.924559  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:38.924594  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:38.954848  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:38.954884  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:39.010528  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:39.010564  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:39.024383  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:39.024412  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:39.090716  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:39.082311    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.082890    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.084642    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.085084    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.086585    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:39.082311    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.082890    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.084642    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.085084    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.086585    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:41.591539  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:41.602064  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:41.602135  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:41.626512  291455 cri.go:89] found id: ""
	I1212 01:37:41.626584  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.626609  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:41.626629  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:41.626713  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:41.651218  291455 cri.go:89] found id: ""
	I1212 01:37:41.651294  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.651317  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:41.651339  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:41.651429  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:41.676032  291455 cri.go:89] found id: ""
	I1212 01:37:41.676055  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.676064  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:41.676070  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:41.676144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:41.700472  291455 cri.go:89] found id: ""
	I1212 01:37:41.700495  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.700509  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:41.700516  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:41.700573  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:41.728292  291455 cri.go:89] found id: ""
	I1212 01:37:41.728317  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.728326  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:41.728332  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:41.728413  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:41.752458  291455 cri.go:89] found id: ""
	I1212 01:37:41.752496  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.752508  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:41.752515  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:41.752687  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:41.778677  291455 cri.go:89] found id: ""
	I1212 01:37:41.778703  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.778711  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:41.778717  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:41.778802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:41.831103  291455 cri.go:89] found id: ""
	I1212 01:37:41.831129  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.831138  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:41.831147  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:41.831158  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:41.922931  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:41.914201    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.914946    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.916560    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.917145    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.918787    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:41.922954  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:41.922966  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:41.948574  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:41.948606  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:41.976883  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:41.976910  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:42.031740  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:42.031774  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:44.547156  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:44.557779  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:44.557852  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:44.585516  291455 cri.go:89] found id: ""
	I1212 01:37:44.585539  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.585547  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:44.585554  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:44.585614  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:44.610080  291455 cri.go:89] found id: ""
	I1212 01:37:44.610146  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.610170  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:44.610188  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:44.610282  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:44.634333  291455 cri.go:89] found id: ""
	I1212 01:37:44.634403  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.634428  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:44.634449  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:44.634538  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:44.659415  291455 cri.go:89] found id: ""
	I1212 01:37:44.659441  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.659450  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:44.659457  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:44.659518  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:44.688713  291455 cri.go:89] found id: ""
	I1212 01:37:44.688738  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.688747  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:44.688753  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:44.688813  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:44.713219  291455 cri.go:89] found id: ""
	I1212 01:37:44.713245  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.713262  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:44.713270  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:44.713334  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:44.736447  291455 cri.go:89] found id: ""
	I1212 01:37:44.736472  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.736480  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:44.736486  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:44.736562  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:44.762258  291455 cri.go:89] found id: ""
	I1212 01:37:44.762283  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.762292  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:44.762324  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:44.762341  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:44.839027  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:44.839065  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:44.856616  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:44.856643  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:44.936247  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:44.928242    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.928784    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.930267    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.930803    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.932347    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:44.936278  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:44.936291  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:44.961626  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:44.961659  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:47.490976  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:47.501776  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:47.501852  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:47.532240  291455 cri.go:89] found id: ""
	I1212 01:37:47.532263  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.532271  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:47.532276  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:47.532336  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:47.556453  291455 cri.go:89] found id: ""
	I1212 01:37:47.556475  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.556484  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:47.556490  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:47.556551  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:47.580605  291455 cri.go:89] found id: ""
	I1212 01:37:47.580628  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.580637  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:47.580643  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:47.580709  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:47.605106  291455 cri.go:89] found id: ""
	I1212 01:37:47.605130  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.605139  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:47.605145  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:47.605224  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:47.630587  291455 cri.go:89] found id: ""
	I1212 01:37:47.630613  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.630622  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:47.630629  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:47.630733  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:47.656391  291455 cri.go:89] found id: ""
	I1212 01:37:47.656416  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.656424  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:47.656431  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:47.656489  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:47.680787  291455 cri.go:89] found id: ""
	I1212 01:37:47.680817  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.680826  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:47.680832  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:47.680913  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:47.706371  291455 cri.go:89] found id: ""
	I1212 01:37:47.706396  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.706405  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:47.706414  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:47.706458  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:47.763648  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:47.763687  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:47.777355  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:47.777383  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:47.899204  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:47.891161    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.891855    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.893228    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.893728    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.895403    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:47.899226  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:47.899238  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:47.924220  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:47.924256  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:50.458301  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:50.468856  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:50.468926  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:50.493349  291455 cri.go:89] found id: ""
	I1212 01:37:50.493374  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.493382  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:50.493388  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:50.493445  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:50.517926  291455 cri.go:89] found id: ""
	I1212 01:37:50.517951  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.517960  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:50.517966  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:50.518026  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:50.546779  291455 cri.go:89] found id: ""
	I1212 01:37:50.546805  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.546814  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:50.546819  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:50.546877  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:50.572059  291455 cri.go:89] found id: ""
	I1212 01:37:50.572086  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.572102  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:50.572110  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:50.572173  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:50.596562  291455 cri.go:89] found id: ""
	I1212 01:37:50.596585  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.596594  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:50.596601  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:50.596669  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:50.621102  291455 cri.go:89] found id: ""
	I1212 01:37:50.621124  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.621132  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:50.621138  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:50.621196  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:50.645424  291455 cri.go:89] found id: ""
	I1212 01:37:50.645445  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.645454  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:50.645461  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:50.645521  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:50.670456  291455 cri.go:89] found id: ""
	I1212 01:37:50.670479  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.670487  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:50.670497  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:50.670508  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:50.726487  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:50.726519  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:50.740149  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:50.740178  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:50.846147  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:50.836239    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.837070    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.839024    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.839387    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.840598    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:50.846174  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:50.846188  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:50.882509  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:50.882583  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:53.411213  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:53.421355  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:53.421422  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:53.444104  291455 cri.go:89] found id: ""
	I1212 01:37:53.444130  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.444139  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:53.444146  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:53.444205  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:53.467938  291455 cri.go:89] found id: ""
	I1212 01:37:53.467963  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.467972  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:53.467979  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:53.468038  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:53.492082  291455 cri.go:89] found id: ""
	I1212 01:37:53.492106  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.492115  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:53.492122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:53.492180  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:53.516011  291455 cri.go:89] found id: ""
	I1212 01:37:53.516040  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.516049  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:53.516056  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:53.516115  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:53.543513  291455 cri.go:89] found id: ""
	I1212 01:37:53.543550  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.543559  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:53.543565  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:53.543707  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:53.568681  291455 cri.go:89] found id: ""
	I1212 01:37:53.568705  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.568713  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:53.568720  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:53.568797  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:53.593562  291455 cri.go:89] found id: ""
	I1212 01:37:53.593587  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.593596  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:53.593602  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:53.593676  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:53.617634  291455 cri.go:89] found id: ""
	I1212 01:37:53.617658  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.617667  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:53.617677  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:53.617691  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:53.672956  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:53.672991  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:53.686739  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:53.686767  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:53.753435  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:53.745274    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.746109    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.747777    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.748302    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.749767    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:53.753456  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:53.753470  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:53.785303  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:53.785347  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:56.343327  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:56.353619  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:56.353686  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:56.377008  291455 cri.go:89] found id: ""
	I1212 01:37:56.377032  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.377040  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:56.377047  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:56.377103  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:56.403572  291455 cri.go:89] found id: ""
	I1212 01:37:56.403599  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.403607  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:56.403614  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:56.403677  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:56.427234  291455 cri.go:89] found id: ""
	I1212 01:37:56.427256  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.427266  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:56.427272  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:56.427329  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:56.450300  291455 cri.go:89] found id: ""
	I1212 01:37:56.450325  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.450334  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:56.450340  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:56.450399  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:56.478269  291455 cri.go:89] found id: ""
	I1212 01:37:56.478293  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.478302  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:56.478308  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:56.478402  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:56.502839  291455 cri.go:89] found id: ""
	I1212 01:37:56.502863  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.502872  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:56.502879  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:56.502939  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:56.528770  291455 cri.go:89] found id: ""
	I1212 01:37:56.528796  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.528804  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:56.528810  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:56.528886  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:56.552625  291455 cri.go:89] found id: ""
	I1212 01:37:56.552687  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.552701  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:56.552710  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:56.552722  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:56.582901  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:56.582929  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:56.638758  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:56.638790  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:56.652337  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:56.652364  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:56.718815  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:56.710468    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.711245    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.712862    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.713372    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.714933    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:56.718853  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:56.718866  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:59.245105  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:59.255232  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:59.255300  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:59.280996  291455 cri.go:89] found id: ""
	I1212 01:37:59.281018  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.281027  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:59.281033  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:59.281089  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:59.306870  291455 cri.go:89] found id: ""
	I1212 01:37:59.306893  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.306901  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:59.306908  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:59.306967  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:59.332982  291455 cri.go:89] found id: ""
	I1212 01:37:59.333008  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.333017  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:59.333022  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:59.333128  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:59.360799  291455 cri.go:89] found id: ""
	I1212 01:37:59.360824  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.360833  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:59.360839  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:59.360897  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:59.383773  291455 cri.go:89] found id: ""
	I1212 01:37:59.383836  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.383851  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:59.383858  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:59.383916  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:59.411933  291455 cri.go:89] found id: ""
	I1212 01:37:59.411958  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.411966  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:59.411973  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:59.412073  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:59.437061  291455 cri.go:89] found id: ""
	I1212 01:37:59.437087  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.437095  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:59.437102  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:59.437182  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:59.461853  291455 cri.go:89] found id: ""
	I1212 01:37:59.461877  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.461886  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:59.461895  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:59.461907  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:59.493084  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:59.493111  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:59.549198  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:59.549229  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:59.562644  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:59.562674  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:59.627349  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:59.619195    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.619835    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.621508    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.622053    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.623671    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:59.627373  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:59.627388  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:02.153040  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:02.163386  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:02.163465  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:02.188022  291455 cri.go:89] found id: ""
	I1212 01:38:02.188050  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.188058  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:02.188064  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:02.188126  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:02.212051  291455 cri.go:89] found id: ""
	I1212 01:38:02.212088  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.212097  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:02.212104  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:02.212163  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:02.236784  291455 cri.go:89] found id: ""
	I1212 01:38:02.236815  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.236824  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:02.236831  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:02.236895  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:02.262277  291455 cri.go:89] found id: ""
	I1212 01:38:02.262301  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.262310  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:02.262316  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:02.262375  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:02.286641  291455 cri.go:89] found id: ""
	I1212 01:38:02.286665  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.286674  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:02.286680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:02.286739  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:02.315696  291455 cri.go:89] found id: ""
	I1212 01:38:02.315721  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.315729  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:02.315736  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:02.315796  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:02.341469  291455 cri.go:89] found id: ""
	I1212 01:38:02.341495  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.341504  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:02.341511  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:02.341578  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:02.375601  291455 cri.go:89] found id: ""
	I1212 01:38:02.375626  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.375634  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:02.375644  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:02.375656  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:02.388949  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:02.388978  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:02.458902  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:02.448758    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.449311    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.452630    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.453261    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.454829    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:02.458924  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:02.458936  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:02.485359  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:02.485393  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:02.512676  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:02.512746  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:05.069728  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:05.084872  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:05.084975  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:05.130414  291455 cri.go:89] found id: ""
	I1212 01:38:05.130441  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.130450  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:05.130457  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:05.130524  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:05.156129  291455 cri.go:89] found id: ""
	I1212 01:38:05.156154  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.156163  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:05.156169  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:05.156230  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:05.182033  291455 cri.go:89] found id: ""
	I1212 01:38:05.182056  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.182065  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:05.182071  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:05.182131  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:05.206795  291455 cri.go:89] found id: ""
	I1212 01:38:05.206821  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.206830  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:05.206842  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:05.206903  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:05.231972  291455 cri.go:89] found id: ""
	I1212 01:38:05.231998  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.232008  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:05.232014  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:05.232075  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:05.257476  291455 cri.go:89] found id: ""
	I1212 01:38:05.257501  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.257509  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:05.257515  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:05.257576  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:05.282557  291455 cri.go:89] found id: ""
	I1212 01:38:05.282581  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.282590  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:05.282595  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:05.282655  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:05.306866  291455 cri.go:89] found id: ""
	I1212 01:38:05.306891  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.306899  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
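The sweep above queries one control-plane component at a time: `crictl ps -a --quiet --name=<component>` prints one container ID per line, and empty output produces the `found id: ""` / `0 containers` / `No container was found matching` sequence in the log. A hedged Go sketch of that sweep (component names copied from the log; the code itself is an illustration, not minikube's implementation):

```go
// Ask crictl for container IDs by name filter, once per component.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// --quiet prints bare IDs, one per line; Fields drops the blanks.
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
```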
	I1212 01:38:05.306908  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:05.306919  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:05.363028  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:05.363073  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
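The dmesg step keeps only warning-or-worse kernel messages and caps the result at 400 lines. A sketch that runs the pipeline through `bash -c` the way the log does, with the flags copied verbatim from the line above (illustration only):

```go
// Kernel-log step: filter dmesg to warn-and-above, keep the last 400 lines.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("dmesg failed:", err)
	}
	fmt.Print(string(out))
}
```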
	I1212 01:38:05.376693  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:05.376722  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:05.445040  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:05.435873    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.436618    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.438470    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.439137    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.440737    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:05.435873    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.436618    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.438470    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.439137    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.440737    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
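Every `kubectl describe nodes` attempt above dies at the same point: the TCP dial to localhost:8443 is refused before any API request is made, which is consistent with the earlier finding that no kube-apiserver container exists. A minimal reachability probe one might run before shelling out to kubectl (an illustration, not part of minikube; the address matches the refused dials in the log, the 2-second timeout is an assumption):

```go
// Check whether anything is listening on the apiserver endpoint.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the "connect: connection refused" errors above.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```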
	I1212 01:38:05.445059  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:05.445071  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:05.470893  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:05.470933  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
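The timestamps show the whole sweep repeating roughly every three seconds: each cycle starts with `pgrep -xnf kube-apiserver.*minikube.*` and, finding no process, re-runs the container listings and log gathering. A hedged sketch of that retry loop (the pgrep pattern is copied from the log; the ~3s interval is read off the timestamps, and the deadline is an assumption for illustration):

```go
// Poll for a running kube-apiserver process until a deadline.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits non-zero when no process matches the pattern.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```

On success the real flow would stop diagnosing and proceed; here every poll in the section fails, so the same diagnostic cycle repeats until the test's 500-second budget is exhausted.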
	I1212 01:38:08.000563  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:08.015628  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:08.015701  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:08.081620  291455 cri.go:89] found id: ""
	I1212 01:38:08.081643  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.081652  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:08.081661  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:08.081736  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:08.129116  291455 cri.go:89] found id: ""
	I1212 01:38:08.129137  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.129146  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:08.129152  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:08.129208  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:08.154760  291455 cri.go:89] found id: ""
	I1212 01:38:08.154781  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.154790  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:08.154797  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:08.154853  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:08.181948  291455 cri.go:89] found id: ""
	I1212 01:38:08.181971  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.181981  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:08.181988  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:08.182052  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:08.206310  291455 cri.go:89] found id: ""
	I1212 01:38:08.206335  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.206345  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:08.206351  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:08.206413  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:08.230579  291455 cri.go:89] found id: ""
	I1212 01:38:08.230606  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.230615  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:08.230624  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:08.230690  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:08.259888  291455 cri.go:89] found id: ""
	I1212 01:38:08.259913  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.259922  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:08.259928  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:08.260006  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:08.284903  291455 cri.go:89] found id: ""
	I1212 01:38:08.284927  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.284936  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:08.284945  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:08.284957  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:08.341529  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:08.341565  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:08.355353  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:08.355394  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:08.418766  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:08.409488    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.410375    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.412414    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.413281    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.414948    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:08.409488    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.410375    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.412414    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.413281    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.414948    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:08.418789  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:08.418801  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:08.444616  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:08.444654  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:10.972656  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:10.983126  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:10.983206  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:11.011272  291455 cri.go:89] found id: ""
	I1212 01:38:11.011296  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.011305  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:11.011311  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:11.011372  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:11.061173  291455 cri.go:89] found id: ""
	I1212 01:38:11.061199  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.061208  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:11.061214  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:11.061273  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:11.124035  291455 cri.go:89] found id: ""
	I1212 01:38:11.124061  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.124070  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:11.124077  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:11.124144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:11.152861  291455 cri.go:89] found id: ""
	I1212 01:38:11.152900  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.152910  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:11.152932  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:11.153005  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:11.178248  291455 cri.go:89] found id: ""
	I1212 01:38:11.178270  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.178279  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:11.178285  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:11.178355  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:11.213235  291455 cri.go:89] found id: ""
	I1212 01:38:11.213260  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.213269  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:11.213275  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:11.213337  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:11.238933  291455 cri.go:89] found id: ""
	I1212 01:38:11.238960  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.238969  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:11.238975  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:11.239060  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:11.264115  291455 cri.go:89] found id: ""
	I1212 01:38:11.264137  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.264146  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:11.264155  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:11.264167  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:11.320523  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:11.320561  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:11.334027  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:11.334059  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:11.411780  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:11.403056    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.403575    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.405319    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.405839    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.407505    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:11.403056    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.403575    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.405319    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.405839    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.407505    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:11.411803  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:11.411815  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:11.437459  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:11.437498  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:13.966371  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:13.976737  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:13.976807  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:14.002889  291455 cri.go:89] found id: ""
	I1212 01:38:14.002926  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.002936  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:14.002943  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:14.003051  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:14.028607  291455 cri.go:89] found id: ""
	I1212 01:38:14.028632  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.028640  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:14.028647  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:14.028707  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:14.068137  291455 cri.go:89] found id: ""
	I1212 01:38:14.068159  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.068168  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:14.068174  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:14.068236  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:14.114047  291455 cri.go:89] found id: ""
	I1212 01:38:14.114068  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.114077  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:14.114083  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:14.114142  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:14.143724  291455 cri.go:89] found id: ""
	I1212 01:38:14.143751  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.143760  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:14.143766  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:14.143837  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:14.172821  291455 cri.go:89] found id: ""
	I1212 01:38:14.172844  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.172853  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:14.172860  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:14.172922  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:14.201404  291455 cri.go:89] found id: ""
	I1212 01:38:14.201428  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.201437  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:14.201443  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:14.201502  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:14.225421  291455 cri.go:89] found id: ""
	I1212 01:38:14.225445  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.225454  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:14.225464  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:14.225475  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:14.281620  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:14.281655  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:14.295270  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:14.295297  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:14.361558  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:14.353174    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.353959    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.355541    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.356054    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.357617    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:14.353174    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.353959    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.355541    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.356054    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.357617    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:14.361580  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:14.361594  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:14.387622  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:14.387657  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:16.917930  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:16.928677  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:16.928747  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:16.956782  291455 cri.go:89] found id: ""
	I1212 01:38:16.956805  291455 logs.go:282] 0 containers: []
	W1212 01:38:16.956815  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:16.956821  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:16.956882  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:16.982223  291455 cri.go:89] found id: ""
	I1212 01:38:16.982255  291455 logs.go:282] 0 containers: []
	W1212 01:38:16.982264  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:16.982270  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:16.982337  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:17.011072  291455 cri.go:89] found id: ""
	I1212 01:38:17.011097  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.011107  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:17.011114  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:17.011191  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:17.052070  291455 cri.go:89] found id: ""
	I1212 01:38:17.052096  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.052104  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:17.052110  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:17.052177  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:17.084107  291455 cri.go:89] found id: ""
	I1212 01:38:17.084141  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.084151  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:17.084157  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:17.084224  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:17.122692  291455 cri.go:89] found id: ""
	I1212 01:38:17.122766  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.122797  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:17.122817  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:17.122923  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:17.156006  291455 cri.go:89] found id: ""
	I1212 01:38:17.156081  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.156109  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:17.156129  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:17.156241  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:17.182169  291455 cri.go:89] found id: ""
	I1212 01:38:17.182240  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.182264  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:17.182285  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:17.182335  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:17.237895  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:17.237933  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:17.252584  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:17.252654  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:17.321480  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:17.312815    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.313531    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.315204    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.315765    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.317270    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:17.312815    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.313531    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.315204    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.315765    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.317270    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:17.321502  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:17.321515  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:17.347596  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:17.347629  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:19.879967  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:19.890396  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:19.890464  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:19.918925  291455 cri.go:89] found id: ""
	I1212 01:38:19.918949  291455 logs.go:282] 0 containers: []
	W1212 01:38:19.918958  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:19.918964  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:19.919053  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:19.943584  291455 cri.go:89] found id: ""
	I1212 01:38:19.943610  291455 logs.go:282] 0 containers: []
	W1212 01:38:19.943619  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:19.943626  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:19.943681  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:19.969048  291455 cri.go:89] found id: ""
	I1212 01:38:19.969068  291455 logs.go:282] 0 containers: []
	W1212 01:38:19.969077  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:19.969083  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:19.969144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:20.003773  291455 cri.go:89] found id: ""
	I1212 01:38:20.003795  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.003804  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:20.003821  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:20.003894  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:20.066569  291455 cri.go:89] found id: ""
	I1212 01:38:20.066593  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.066602  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:20.066608  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:20.066672  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:20.123787  291455 cri.go:89] found id: ""
	I1212 01:38:20.123818  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.123828  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:20.123835  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:20.123902  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:20.148942  291455 cri.go:89] found id: ""
	I1212 01:38:20.148967  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.148976  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:20.148982  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:20.149040  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:20.174974  291455 cri.go:89] found id: ""
	I1212 01:38:20.175019  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.175028  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:20.175037  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:20.175049  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:20.188705  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:20.188734  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:20.257975  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:20.247998    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.248900    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.250615    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.251381    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.253188    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:20.247998    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.248900    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.250615    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.251381    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.253188    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:20.258004  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:20.258018  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:20.283558  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:20.283589  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:20.313552  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:20.313580  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:22.869782  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:22.880016  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:22.880091  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:22.903866  291455 cri.go:89] found id: ""
	I1212 01:38:22.903891  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.903901  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:22.903908  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:22.903971  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:22.927721  291455 cri.go:89] found id: ""
	I1212 01:38:22.927744  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.927752  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:22.927759  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:22.927816  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:22.952423  291455 cri.go:89] found id: ""
	I1212 01:38:22.952447  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.952455  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:22.952461  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:22.952517  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:22.976598  291455 cri.go:89] found id: ""
	I1212 01:38:22.976620  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.976628  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:22.976634  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:22.976691  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:23.003885  291455 cri.go:89] found id: ""
	I1212 01:38:23.003919  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.003939  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:23.003947  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:23.004046  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:23.033013  291455 cri.go:89] found id: ""
	I1212 01:38:23.033036  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.033045  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:23.033052  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:23.033112  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:23.092706  291455 cri.go:89] found id: ""
	I1212 01:38:23.092730  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.092739  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:23.092745  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:23.092802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:23.133640  291455 cri.go:89] found id: ""
	I1212 01:38:23.133668  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.133676  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:23.133686  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:23.133697  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:23.196413  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:23.196452  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:23.209608  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:23.209634  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:23.275524  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:23.267738    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.268351    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.269907    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.270261    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.271739    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:23.267738    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.268351    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.269907    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.270261    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.271739    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:23.275547  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:23.275559  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:23.300618  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:23.300651  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:25.829093  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:25.839308  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:25.839392  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:25.862901  291455 cri.go:89] found id: ""
	I1212 01:38:25.862927  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.862936  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:25.862942  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:25.863050  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:25.886878  291455 cri.go:89] found id: ""
	I1212 01:38:25.886912  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.886921  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:25.886927  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:25.887012  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:25.912760  291455 cri.go:89] found id: ""
	I1212 01:38:25.912782  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.912791  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:25.912799  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:25.912867  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:25.937385  291455 cri.go:89] found id: ""
	I1212 01:38:25.937409  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.937418  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:25.937424  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:25.937482  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:25.961635  291455 cri.go:89] found id: ""
	I1212 01:38:25.961659  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.961668  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:25.961674  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:25.961736  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:25.984780  291455 cri.go:89] found id: ""
	I1212 01:38:25.984804  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.984814  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:25.984821  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:25.984886  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:26.013891  291455 cri.go:89] found id: ""
	I1212 01:38:26.013918  291455 logs.go:282] 0 containers: []
	W1212 01:38:26.013927  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:26.013933  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:26.013995  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:26.058178  291455 cri.go:89] found id: ""
	I1212 01:38:26.058203  291455 logs.go:282] 0 containers: []
	W1212 01:38:26.058212  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:26.058222  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:26.058233  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:26.145226  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:26.145265  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:26.159401  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:26.159430  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:26.224696  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:26.216061    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.217085    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.217937    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.219401    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.219913    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:26.224716  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:26.224727  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:26.249818  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:26.249853  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
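Every "describe nodes" attempt in this window fails the same way: nothing is listening on 127.0.0.1:8443 inside the node, so kubectl's API discovery is refused before any request is made. A manual probe that should fail identically while kube-apiserver is down (a sketch only, assuming shell access to the node, e.g. via minikube ssh; the curl call is illustrative and not part of the test harness):

    # Same kubectl binary and kubeconfig the harness invokes above.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    # Cruder check: is anything accepting connections on the apiserver port?
    curl -ksS https://localhost:8443/healthz \
        || echo "apiserver not listening on 8443"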
	I1212 01:38:28.780686  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:28.791844  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:28.791927  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:28.820089  291455 cri.go:89] found id: ""
	I1212 01:38:28.820114  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.820123  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:28.820129  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:28.820187  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:28.844073  291455 cri.go:89] found id: ""
	I1212 01:38:28.844097  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.844106  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:28.844115  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:28.844173  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:28.874510  291455 cri.go:89] found id: ""
	I1212 01:38:28.874535  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.874544  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:28.874550  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:28.874609  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:28.899593  291455 cri.go:89] found id: ""
	I1212 01:38:28.899667  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.899683  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:28.899691  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:28.899749  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:28.923958  291455 cri.go:89] found id: ""
	I1212 01:38:28.923981  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.923990  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:28.923996  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:28.924058  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:28.949188  291455 cri.go:89] found id: ""
	I1212 01:38:28.949217  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.949225  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:28.949231  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:28.949307  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:28.974943  291455 cri.go:89] found id: ""
	I1212 01:38:28.974968  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.974976  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:28.974982  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:28.975062  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:29.004380  291455 cri.go:89] found id: ""
	I1212 01:38:29.004475  291455 logs.go:282] 0 containers: []
	W1212 01:38:29.004501  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:29.004542  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:29.004572  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:29.021785  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:29.021856  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:29.143333  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:29.134378    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.134910    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.137306    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.137843    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.139511    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:29.143354  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:29.143366  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:29.168668  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:29.168699  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:29.197133  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:29.197159  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
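Each retry cycle above queries the CRI once per expected control-plane component before gathering logs. The sweep is equivalent to the following sketch (hypothetical shorthand, not minikube code; it assumes crictl is on the node's PATH, as the harness's `which crictl || echo crictl` fallback suggests):

    # Probe each expected component; an empty ID list corresponds to the
    # "No container was found matching ..." warnings in this log.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching $name"
    done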
	I1212 01:38:31.753888  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:31.765059  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:31.765150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:31.790319  291455 cri.go:89] found id: ""
	I1212 01:38:31.790342  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.790350  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:31.790357  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:31.790415  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:31.815400  291455 cri.go:89] found id: ""
	I1212 01:38:31.815424  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.815434  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:31.815441  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:31.815502  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:31.840194  291455 cri.go:89] found id: ""
	I1212 01:38:31.840217  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.840226  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:31.840231  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:31.840291  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:31.867911  291455 cri.go:89] found id: ""
	I1212 01:38:31.867935  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.867943  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:31.867949  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:31.868008  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:31.892198  291455 cri.go:89] found id: ""
	I1212 01:38:31.892222  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.892230  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:31.892238  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:31.892296  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:31.916890  291455 cri.go:89] found id: ""
	I1212 01:38:31.916914  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.916923  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:31.916929  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:31.916988  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:31.942060  291455 cri.go:89] found id: ""
	I1212 01:38:31.942085  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.942095  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:31.942102  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:31.942160  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:31.968817  291455 cri.go:89] found id: ""
	I1212 01:38:31.968839  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.968848  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:31.968857  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:31.968871  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:31.997201  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:31.997227  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:32.062907  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:32.062945  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:32.079848  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:32.079874  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:32.172399  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:32.162924    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.163521    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.165105    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.165573    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.167197    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:32.172421  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:32.172433  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:34.699204  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:34.710589  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:34.710660  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:34.734740  291455 cri.go:89] found id: ""
	I1212 01:38:34.734767  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.734776  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:34.734782  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:34.734841  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:34.759636  291455 cri.go:89] found id: ""
	I1212 01:38:34.759659  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.759667  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:34.759679  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:34.759739  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:34.785220  291455 cri.go:89] found id: ""
	I1212 01:38:34.785255  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.785265  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:34.785271  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:34.785341  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:34.814480  291455 cri.go:89] found id: ""
	I1212 01:38:34.814502  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.814510  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:34.814516  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:34.814580  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:34.840740  291455 cri.go:89] found id: ""
	I1212 01:38:34.840774  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.840784  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:34.840790  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:34.840872  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:34.868875  291455 cri.go:89] found id: ""
	I1212 01:38:34.868898  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.868907  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:34.868913  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:34.868973  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:34.897841  291455 cri.go:89] found id: ""
	I1212 01:38:34.897864  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.897873  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:34.897879  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:34.897937  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:34.921846  291455 cri.go:89] found id: ""
	I1212 01:38:34.921869  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.921877  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:34.921886  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:34.921897  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:34.935038  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:34.935066  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:35.007684  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:34.997327    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:34.997746    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:34.999039    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:34.999714    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:35.001615    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:35.007755  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:35.007775  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:35.034750  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:35.034794  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:35.089747  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:35.089777  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
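Before each CRI sweep, the harness first checks for a live apiserver process with pgrep. A standalone equivalent (a sketch; the flag semantics are standard pgrep):

    # -f matches against the full command line, -x requires the whole line
    # to match the pattern, and -n picks the newest matching process.
    # Exit status 1 means no kube-apiserver process exists yet, which is
    # what keeps the retry loop above going.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
        || echo 'kube-apiserver process not found'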
	I1212 01:38:37.657148  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:37.668842  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:37.668917  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:37.696665  291455 cri.go:89] found id: ""
	I1212 01:38:37.696699  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.696708  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:37.696720  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:37.696777  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:37.728956  291455 cri.go:89] found id: ""
	I1212 01:38:37.728979  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.728987  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:37.728993  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:37.729058  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:37.753296  291455 cri.go:89] found id: ""
	I1212 01:38:37.753324  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.753334  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:37.753340  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:37.753397  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:37.778445  291455 cri.go:89] found id: ""
	I1212 01:38:37.778471  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.778481  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:37.778490  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:37.778548  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:37.807550  291455 cri.go:89] found id: ""
	I1212 01:38:37.807572  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.807580  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:37.807587  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:37.807649  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:37.832292  291455 cri.go:89] found id: ""
	I1212 01:38:37.832315  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.832323  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:37.832329  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:37.832386  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:37.856566  291455 cri.go:89] found id: ""
	I1212 01:38:37.856588  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.856597  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:37.856602  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:37.856660  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:37.880677  291455 cri.go:89] found id: ""
	I1212 01:38:37.880741  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.880766  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:37.880789  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:37.880820  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:37.910870  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:37.910908  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:37.938485  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:37.938520  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:37.993961  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:37.993995  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:38.010371  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:38.010404  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:38.096529  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:38.085475    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.086344    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.088104    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.088451    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.092325    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:40.598418  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:40.609775  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:40.609847  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:40.635651  291455 cri.go:89] found id: ""
	I1212 01:38:40.635677  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.635686  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:40.635693  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:40.635757  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:40.660863  291455 cri.go:89] found id: ""
	I1212 01:38:40.660889  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.660898  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:40.660905  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:40.660966  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:40.685941  291455 cri.go:89] found id: ""
	I1212 01:38:40.686012  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.686053  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:40.686078  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:40.686166  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:40.711525  291455 cri.go:89] found id: ""
	I1212 01:38:40.711554  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.711563  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:40.711569  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:40.711630  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:40.737721  291455 cri.go:89] found id: ""
	I1212 01:38:40.737795  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.737816  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:40.737836  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:40.737927  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:40.761337  291455 cri.go:89] found id: ""
	I1212 01:38:40.761402  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.761424  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:40.761442  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:40.761525  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:40.786163  291455 cri.go:89] found id: ""
	I1212 01:38:40.786239  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.786264  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:40.786285  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:40.786412  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:40.810546  291455 cri.go:89] found id: ""
	I1212 01:38:40.810610  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.810634  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:40.810655  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:40.810694  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:40.866283  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:40.866320  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:40.879799  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:40.879834  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:40.945902  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:40.937611    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.938411    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.939975    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.940544    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.942091    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:40.945925  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:40.945938  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:40.971267  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:40.971302  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:43.502022  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:43.513782  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:43.513855  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:43.538026  291455 cri.go:89] found id: ""
	I1212 01:38:43.538047  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.538055  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:43.538060  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:43.538117  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:43.562296  291455 cri.go:89] found id: ""
	I1212 01:38:43.562320  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.562329  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:43.562335  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:43.562399  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:43.585964  291455 cri.go:89] found id: ""
	I1212 01:38:43.585986  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.585995  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:43.586001  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:43.586056  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:43.609636  291455 cri.go:89] found id: ""
	I1212 01:38:43.609658  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.609666  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:43.609672  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:43.609729  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:43.634822  291455 cri.go:89] found id: ""
	I1212 01:38:43.634843  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.634852  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:43.634857  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:43.634916  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:43.659517  291455 cri.go:89] found id: ""
	I1212 01:38:43.659539  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.659553  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:43.659560  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:43.659619  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:43.684416  291455 cri.go:89] found id: ""
	I1212 01:38:43.684471  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.684486  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:43.684493  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:43.684557  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:43.708909  291455 cri.go:89] found id: ""
	I1212 01:38:43.708931  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.708939  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:43.708949  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:43.708961  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:43.764034  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:43.764069  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:43.778276  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:43.778304  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:43.849112  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:43.839330    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.839703    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.842808    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.843485    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.845319    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:43.849132  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:43.849144  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:43.874790  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:43.874823  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:46.404666  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:46.415686  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:46.415772  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:46.446409  291455 cri.go:89] found id: ""
	I1212 01:38:46.446436  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.446445  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:46.446452  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:46.446517  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:46.481137  291455 cri.go:89] found id: ""
	I1212 01:38:46.481160  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.481169  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:46.481175  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:46.481258  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:46.506866  291455 cri.go:89] found id: ""
	I1212 01:38:46.506892  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.506902  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:46.506908  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:46.506964  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:46.535109  291455 cri.go:89] found id: ""
	I1212 01:38:46.535185  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.535208  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:46.535228  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:46.535312  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:46.559379  291455 cri.go:89] found id: ""
	I1212 01:38:46.559402  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.559410  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:46.559417  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:46.559478  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:46.583642  291455 cri.go:89] found id: ""
	I1212 01:38:46.583717  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.583738  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:46.583758  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:46.583842  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:46.608474  291455 cri.go:89] found id: ""
	I1212 01:38:46.608541  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.608563  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:46.608578  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:46.608652  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:46.632905  291455 cri.go:89] found id: ""
	I1212 01:38:46.632982  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.632997  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:46.633007  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:46.633018  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:46.689011  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:46.689048  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:46.702565  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:46.702592  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:46.772610  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:46.763145    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.764149    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.764820    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.766385    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.766678    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:46.772629  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:46.772643  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:46.797690  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:46.797725  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:49.328051  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:49.341287  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:49.341360  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:49.378113  291455 cri.go:89] found id: ""
	I1212 01:38:49.378135  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.378143  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:49.378149  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:49.378210  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:49.404269  291455 cri.go:89] found id: ""
	I1212 01:38:49.404291  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.404300  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:49.404306  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:49.404364  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:49.428783  291455 cri.go:89] found id: ""
	I1212 01:38:49.428809  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.428819  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:49.428825  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:49.428884  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:49.453856  291455 cri.go:89] found id: ""
	I1212 01:38:49.453889  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.453898  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:49.453905  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:49.453965  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:49.480403  291455 cri.go:89] found id: ""
	I1212 01:38:49.480428  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.480439  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:49.480445  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:49.480502  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:49.505527  291455 cri.go:89] found id: ""
	I1212 01:38:49.505594  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.505617  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:49.505644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:49.505740  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:49.529450  291455 cri.go:89] found id: ""
	I1212 01:38:49.529474  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.529483  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:49.529489  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:49.529546  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:49.554349  291455 cri.go:89] found id: ""
	I1212 01:38:49.554412  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.554435  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:49.554465  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:49.554493  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:49.611773  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:49.611805  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:49.625145  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:49.625169  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:49.689186  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:49.680639    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.681463    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.683157    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.683640    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.685287    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:49.689208  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:49.689220  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:49.715241  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:49.715275  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
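Each retry above runs the same diagnostic sweep: probe for a kube-apiserver process, list each control-plane container via crictl, then gather kubelet, dmesg, describe-nodes, containerd, and container-status output. A minimal shell sketch of the equivalent manual checks, assuming shell access to the node (e.g. via minikube ssh); this is an illustration, not a transcript of this run:

    # Is a minikube-started kube-apiserver process running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    # Count control-plane containers known to containerd's k8s.io namespace
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      echo "$c: $(sudo crictl ps -a --quiet --name=$c | wc -l) container(s)"
    done
    # Recent kubelet/containerd journal entries usually show why nothing started
    sudo journalctl -u kubelet -n 400 --no-pager | tail -n 40
    sudo journalctl -u containerd -n 400 --no-pager | tail -n 40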
	I1212 01:38:52.245578  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:52.255964  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:52.256032  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:52.288234  291455 cri.go:89] found id: ""
	I1212 01:38:52.288273  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.288281  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:52.288287  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:52.288362  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:52.361726  291455 cri.go:89] found id: ""
	I1212 01:38:52.361756  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.361765  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:52.361772  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:52.361848  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:52.390222  291455 cri.go:89] found id: ""
	I1212 01:38:52.390248  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.390257  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:52.390262  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:52.390320  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:52.415677  291455 cri.go:89] found id: ""
	I1212 01:38:52.415712  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.415721  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:52.415728  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:52.415796  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:52.440412  291455 cri.go:89] found id: ""
	I1212 01:38:52.440435  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.440444  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:52.440450  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:52.440508  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:52.464172  291455 cri.go:89] found id: ""
	I1212 01:38:52.464203  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.464212  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:52.464219  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:52.464278  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:52.496050  291455 cri.go:89] found id: ""
	I1212 01:38:52.496075  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.496083  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:52.496089  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:52.496147  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:52.525249  291455 cri.go:89] found id: ""
	I1212 01:38:52.525271  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.525279  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:52.525288  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:52.525299  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:52.580198  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:52.580233  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:52.593582  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:52.593648  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:52.659167  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:52.650803    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.651520    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.653182    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.653702    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.655438    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:52.659187  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:52.659199  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:52.685268  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:52.685300  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:55.219025  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:55.229148  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:55.229222  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:55.252977  291455 cri.go:89] found id: ""
	I1212 01:38:55.253051  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.253066  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:55.253077  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:55.253140  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:55.276881  291455 cri.go:89] found id: ""
	I1212 01:38:55.276945  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.276959  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:55.276966  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:55.277024  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:55.316321  291455 cri.go:89] found id: ""
	I1212 01:38:55.316355  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.316364  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:55.316370  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:55.316447  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:55.355675  291455 cri.go:89] found id: ""
	I1212 01:38:55.355703  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.355711  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:55.355717  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:55.355791  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:55.394580  291455 cri.go:89] found id: ""
	I1212 01:38:55.394607  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.394615  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:55.394621  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:55.394693  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:55.423340  291455 cri.go:89] found id: ""
	I1212 01:38:55.423363  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.423371  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:55.423378  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:55.423436  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:55.447512  291455 cri.go:89] found id: ""
	I1212 01:38:55.447536  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.447544  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:55.447550  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:55.447610  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:55.470830  291455 cri.go:89] found id: ""
	I1212 01:38:55.470853  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.470867  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:55.470876  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:55.470886  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:55.528525  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:55.528561  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:55.541815  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:55.541843  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:55.605253  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:55.596889    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.597592    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.599233    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.599799    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.601358    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:55.605280  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:55.605292  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:55.631237  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:55.631267  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:58.158753  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:58.169462  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:58.169546  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:58.194075  291455 cri.go:89] found id: ""
	I1212 01:38:58.194096  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.194105  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:58.194111  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:58.194171  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:58.218468  291455 cri.go:89] found id: ""
	I1212 01:38:58.218546  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.218569  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:58.218590  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:58.218675  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:58.242950  291455 cri.go:89] found id: ""
	I1212 01:38:58.242973  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.242981  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:58.242987  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:58.243142  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:58.269403  291455 cri.go:89] found id: ""
	I1212 01:38:58.269423  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.269432  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:58.269439  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:58.269502  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:58.317022  291455 cri.go:89] found id: ""
	I1212 01:38:58.317044  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.317054  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:58.317059  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:58.317117  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:58.373414  291455 cri.go:89] found id: ""
	I1212 01:38:58.373486  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.373511  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:58.373531  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:58.373619  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:58.404516  291455 cri.go:89] found id: ""
	I1212 01:38:58.404583  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.404597  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:58.404604  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:58.404663  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:58.433096  291455 cri.go:89] found id: ""
	I1212 01:38:58.433120  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.433131  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:58.433141  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:58.433170  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:58.495200  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:58.486845    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.487734    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.489310    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.489623    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.491296    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:58.495223  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:58.495237  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:58.520595  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:58.520626  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:58.547636  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:58.547664  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:58.603945  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:58.603979  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
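Every describe-nodes attempt above fails with connection refused on localhost:8443, i.e. nothing is listening on the apiserver port. A quick manual confirmation from inside the node, sketched with standard tools (ss and curl are assumptions here, not commands from this log):

    # Confirm there is no listener on the apiserver port
    sudo ss -ltnp | grep -w 8443 || echo "no listener on 8443"
    # A direct probe should fail the same way kubectl does
    curl -ksS https://localhost:8443/livez || true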
	I1212 01:39:01.119071  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:01.130124  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:01.130196  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:01.155700  291455 cri.go:89] found id: ""
	I1212 01:39:01.155725  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.155733  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:01.155740  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:01.155799  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:01.183985  291455 cri.go:89] found id: ""
	I1212 01:39:01.184012  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.184021  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:01.184028  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:01.184095  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:01.211713  291455 cri.go:89] found id: ""
	I1212 01:39:01.211740  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.211749  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:01.211756  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:01.211817  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:01.238159  291455 cri.go:89] found id: ""
	I1212 01:39:01.238185  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.238195  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:01.238201  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:01.238265  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:01.264520  291455 cri.go:89] found id: ""
	I1212 01:39:01.264544  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.264553  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:01.264560  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:01.264618  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:01.320162  291455 cri.go:89] found id: ""
	I1212 01:39:01.320191  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.320200  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:01.320207  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:01.320276  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:01.367993  291455 cri.go:89] found id: ""
	I1212 01:39:01.368020  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.368029  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:01.368037  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:01.368107  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:01.395205  291455 cri.go:89] found id: ""
	I1212 01:39:01.395230  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.395239  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:01.395248  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:01.395260  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:01.450970  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:01.451049  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:01.464511  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:01.464540  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:01.529452  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:01.521771    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.522386    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.523907    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.524217    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.525703    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:01.529472  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:01.529484  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:01.553702  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:01.553734  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:04.082286  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:04.093237  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:04.093313  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:04.118261  291455 cri.go:89] found id: ""
	I1212 01:39:04.118283  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.118292  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:04.118298  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:04.118360  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:04.147714  291455 cri.go:89] found id: ""
	I1212 01:39:04.147736  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.147745  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:04.147751  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:04.147815  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:04.172999  291455 cri.go:89] found id: ""
	I1212 01:39:04.173023  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.173032  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:04.173039  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:04.173101  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:04.197081  291455 cri.go:89] found id: ""
	I1212 01:39:04.197103  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.197111  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:04.197119  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:04.197176  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:04.220639  291455 cri.go:89] found id: ""
	I1212 01:39:04.220665  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.220674  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:04.220681  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:04.220746  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:04.248901  291455 cri.go:89] found id: ""
	I1212 01:39:04.248926  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.248935  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:04.248944  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:04.249011  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:04.274064  291455 cri.go:89] found id: ""
	I1212 01:39:04.274085  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.274093  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:04.274099  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:04.274161  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:04.332510  291455 cri.go:89] found id: ""
	I1212 01:39:04.332535  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.332545  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:04.332555  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:04.332572  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:04.368151  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:04.368189  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:04.403091  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:04.403118  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:04.459000  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:04.459031  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:04.472281  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:04.472306  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:04.534979  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:04.526363    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.527054    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.528724    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.529233    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.530692    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:07.035447  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:07.046244  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:07.046313  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:07.072737  291455 cri.go:89] found id: ""
	I1212 01:39:07.072761  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.072770  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:07.072776  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:07.072835  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:07.097400  291455 cri.go:89] found id: ""
	I1212 01:39:07.097423  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.097431  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:07.097438  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:07.097496  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:07.121464  291455 cri.go:89] found id: ""
	I1212 01:39:07.121486  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.121495  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:07.121501  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:07.121584  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:07.145780  291455 cri.go:89] found id: ""
	I1212 01:39:07.145800  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.145808  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:07.145814  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:07.145870  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:07.169997  291455 cri.go:89] found id: ""
	I1212 01:39:07.170018  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.170027  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:07.170033  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:07.170091  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:07.195061  291455 cri.go:89] found id: ""
	I1212 01:39:07.195088  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.195096  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:07.195103  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:07.195161  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:07.220294  291455 cri.go:89] found id: ""
	I1212 01:39:07.220317  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.220325  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:07.220331  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:07.220389  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:07.245551  291455 cri.go:89] found id: ""
	I1212 01:39:07.245576  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.245586  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:07.245595  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:07.245607  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:07.277493  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:07.277521  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:07.344946  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:07.347238  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:07.376690  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:07.376714  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:07.447695  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:07.438862    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.439591    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.441334    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.441943    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.443673    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:07.447717  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:07.447730  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
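The sweep repeats about every three seconds (01:39:01, 01:39:04, 01:39:07 above). When every sweep comes back empty like this, the kubelet journal gathered above is usually the decisive artifact; a sketch for extracting just its error lines (standard journalctl/grep flags, not taken from this log):

    # Show only error-ish kubelet lines from the last 400 journal entries
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|failed' | tail -n 25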
	I1212 01:39:09.974214  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:09.987839  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:09.987921  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:10.025371  291455 cri.go:89] found id: ""
	I1212 01:39:10.025397  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.025407  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:10.025413  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:10.025477  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:10.051333  291455 cri.go:89] found id: ""
	I1212 01:39:10.051357  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.051366  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:10.051371  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:10.051436  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:10.075263  291455 cri.go:89] found id: ""
	I1212 01:39:10.075289  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.075298  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:10.075305  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:10.075364  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:10.103331  291455 cri.go:89] found id: ""
	I1212 01:39:10.103355  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.103364  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:10.103370  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:10.103431  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:10.128706  291455 cri.go:89] found id: ""
	I1212 01:39:10.128730  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.128739  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:10.128746  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:10.128802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:10.154605  291455 cri.go:89] found id: ""
	I1212 01:39:10.154627  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.154637  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:10.154644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:10.154703  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:10.179767  291455 cri.go:89] found id: ""
	I1212 01:39:10.179791  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.179800  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:10.179806  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:10.179864  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:10.208346  291455 cri.go:89] found id: ""
	I1212 01:39:10.208369  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.208376  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:10.208386  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:10.208397  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:10.263848  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:10.263883  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:10.279969  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:10.279994  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:10.405176  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:10.396616    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.397197    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.398853    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.399595    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.401217    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:10.405198  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:10.405210  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:10.431360  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:10.431398  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:12.959344  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:12.971541  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:12.971628  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:13.006786  291455 cri.go:89] found id: ""
	I1212 01:39:13.006815  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.006824  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:13.006830  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:13.006903  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:13.032106  291455 cri.go:89] found id: ""
	I1212 01:39:13.032127  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.032135  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:13.032141  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:13.032200  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:13.057432  291455 cri.go:89] found id: ""
	I1212 01:39:13.057454  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.057463  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:13.057469  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:13.057529  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:13.082502  291455 cri.go:89] found id: ""
	I1212 01:39:13.082524  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.082532  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:13.082538  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:13.082595  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:13.108199  291455 cri.go:89] found id: ""
	I1212 01:39:13.108272  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.108295  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:13.108323  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:13.108433  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:13.134284  291455 cri.go:89] found id: ""
	I1212 01:39:13.134356  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.134379  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:13.134398  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:13.134485  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:13.159517  291455 cri.go:89] found id: ""
	I1212 01:39:13.159541  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.159550  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:13.159556  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:13.159614  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:13.183175  291455 cri.go:89] found id: ""
	I1212 01:39:13.183199  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.183207  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:13.183216  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:13.183232  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:13.241174  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:13.241210  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:13.254849  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:13.254880  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:13.381552  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:13.373347    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.373888    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.375400    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.375820    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.377000    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:13.381573  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:13.381586  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:13.406354  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:13.406385  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:15.933099  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:15.943596  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:15.943674  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:15.966960  291455 cri.go:89] found id: ""
	I1212 01:39:15.967014  291455 logs.go:282] 0 containers: []
	W1212 01:39:15.967023  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:15.967030  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:15.967090  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:15.996145  291455 cri.go:89] found id: ""
	I1212 01:39:15.996167  291455 logs.go:282] 0 containers: []
	W1212 01:39:15.996175  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:15.996182  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:15.996239  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:16.025152  291455 cri.go:89] found id: ""
	I1212 01:39:16.025175  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.025183  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:16.025191  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:16.025248  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:16.050231  291455 cri.go:89] found id: ""
	I1212 01:39:16.050264  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.050273  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:16.050279  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:16.050345  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:16.076929  291455 cri.go:89] found id: ""
	I1212 01:39:16.076958  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.076967  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:16.076975  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:16.077054  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:16.102241  291455 cri.go:89] found id: ""
	I1212 01:39:16.102273  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.102282  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:16.102304  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:16.102383  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:16.126239  291455 cri.go:89] found id: ""
	I1212 01:39:16.126302  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.126324  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:16.126344  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:16.126417  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:16.151645  291455 cri.go:89] found id: ""
	I1212 01:39:16.151674  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.151683  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:16.151692  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:16.151702  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:16.176852  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:16.176882  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:16.206720  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:16.206746  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:16.262653  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:16.262686  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:16.275603  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:16.275634  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:16.359325  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:16.351492    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.351974    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.353266    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.353666    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.355275    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:18.859963  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:18.870960  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:18.871050  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:18.910477  291455 cri.go:89] found id: ""
	I1212 01:39:18.910504  291455 logs.go:282] 0 containers: []
	W1212 01:39:18.910513  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:18.910519  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:18.910580  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:18.935189  291455 cri.go:89] found id: ""
	I1212 01:39:18.935212  291455 logs.go:282] 0 containers: []
	W1212 01:39:18.935221  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:18.935226  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:18.935282  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:18.960848  291455 cri.go:89] found id: ""
	I1212 01:39:18.960874  291455 logs.go:282] 0 containers: []
	W1212 01:39:18.960883  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:18.960888  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:18.960945  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:18.999545  291455 cri.go:89] found id: ""
	I1212 01:39:18.999572  291455 logs.go:282] 0 containers: []
	W1212 01:39:18.999581  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:18.999594  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:18.999657  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:19.037306  291455 cri.go:89] found id: ""
	I1212 01:39:19.037333  291455 logs.go:282] 0 containers: []
	W1212 01:39:19.037341  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:19.037347  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:19.037405  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:19.076075  291455 cri.go:89] found id: ""
	I1212 01:39:19.076096  291455 logs.go:282] 0 containers: []
	W1212 01:39:19.076104  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:19.076114  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:19.076168  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:19.106494  291455 cri.go:89] found id: ""
	I1212 01:39:19.106515  291455 logs.go:282] 0 containers: []
	W1212 01:39:19.106524  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:19.106529  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:19.106586  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:19.133049  291455 cri.go:89] found id: ""
	I1212 01:39:19.133073  291455 logs.go:282] 0 containers: []
	W1212 01:39:19.133082  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:19.133090  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:19.133105  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:19.218096  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:19.208102    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.208898    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.210671    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.211009    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.214074    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:19.218119  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:19.218140  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:19.246120  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:19.246155  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:19.279088  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:19.279116  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:19.436253  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:19.436340  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:21.952490  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:21.962606  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:21.962676  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:21.986826  291455 cri.go:89] found id: ""
	I1212 01:39:21.986851  291455 logs.go:282] 0 containers: []
	W1212 01:39:21.986859  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:21.986866  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:21.986923  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:22.014517  291455 cri.go:89] found id: ""
	I1212 01:39:22.014541  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.014551  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:22.014557  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:22.014623  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:22.041526  291455 cri.go:89] found id: ""
	I1212 01:39:22.041552  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.041561  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:22.041568  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:22.041633  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:22.067041  291455 cri.go:89] found id: ""
	I1212 01:39:22.067069  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.067079  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:22.067086  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:22.067149  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:22.092937  291455 cri.go:89] found id: ""
	I1212 01:39:22.092973  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.092982  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:22.092988  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:22.093059  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:22.122005  291455 cri.go:89] found id: ""
	I1212 01:39:22.122031  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.122039  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:22.122045  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:22.122107  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:22.147474  291455 cri.go:89] found id: ""
	I1212 01:39:22.147500  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.147508  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:22.147515  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:22.147577  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:22.177172  291455 cri.go:89] found id: ""
	I1212 01:39:22.177199  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.177208  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:22.177219  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:22.177231  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:22.234049  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:22.234083  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:22.247594  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:22.247619  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:22.368443  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:22.359792    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.360617    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.362109    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.362602    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.364143    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:22.368462  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:22.368485  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:22.393929  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:22.393963  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:24.924468  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:24.934611  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:24.934679  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:24.960488  291455 cri.go:89] found id: ""
	I1212 01:39:24.960510  291455 logs.go:282] 0 containers: []
	W1212 01:39:24.960519  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:24.960524  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:24.960580  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:24.985199  291455 cri.go:89] found id: ""
	I1212 01:39:24.985222  291455 logs.go:282] 0 containers: []
	W1212 01:39:24.985231  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:24.985238  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:24.985295  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:25.017557  291455 cri.go:89] found id: ""
	I1212 01:39:25.017583  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.017594  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:25.017601  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:25.017673  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:25.043724  291455 cri.go:89] found id: ""
	I1212 01:39:25.043756  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.043766  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:25.043773  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:25.043836  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:25.068913  291455 cri.go:89] found id: ""
	I1212 01:39:25.068941  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.068951  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:25.068958  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:25.069021  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:25.094251  291455 cri.go:89] found id: ""
	I1212 01:39:25.094274  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.094282  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:25.094288  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:25.094347  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:25.118452  291455 cri.go:89] found id: ""
	I1212 01:39:25.118530  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.118554  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:25.118575  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:25.118691  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:25.143548  291455 cri.go:89] found id: ""
	I1212 01:39:25.143571  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.143584  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:25.143594  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:25.143605  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:25.201626  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:25.201662  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:25.214871  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:25.214900  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:25.278860  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:25.271096    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.271605    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.273123    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.273537    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.275035    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:25.278890  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:25.278903  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:25.313862  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:25.313902  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:27.877952  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:27.888461  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:27.888534  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:27.912285  291455 cri.go:89] found id: ""
	I1212 01:39:27.912308  291455 logs.go:282] 0 containers: []
	W1212 01:39:27.912317  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:27.912323  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:27.912382  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:27.936668  291455 cri.go:89] found id: ""
	I1212 01:39:27.936693  291455 logs.go:282] 0 containers: []
	W1212 01:39:27.936701  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:27.936707  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:27.936763  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:27.964911  291455 cri.go:89] found id: ""
	I1212 01:39:27.964936  291455 logs.go:282] 0 containers: []
	W1212 01:39:27.964945  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:27.964952  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:27.965011  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:27.988509  291455 cri.go:89] found id: ""
	I1212 01:39:27.988530  291455 logs.go:282] 0 containers: []
	W1212 01:39:27.988539  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:27.988545  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:27.988606  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:28.014439  291455 cri.go:89] found id: ""
	I1212 01:39:28.014461  291455 logs.go:282] 0 containers: []
	W1212 01:39:28.014469  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:28.014475  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:28.014542  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:28.040611  291455 cri.go:89] found id: ""
	I1212 01:39:28.040637  291455 logs.go:282] 0 containers: []
	W1212 01:39:28.040646  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:28.040652  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:28.040711  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:28.064823  291455 cri.go:89] found id: ""
	I1212 01:39:28.064844  291455 logs.go:282] 0 containers: []
	W1212 01:39:28.064852  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:28.064858  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:28.064922  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:28.089374  291455 cri.go:89] found id: ""
	I1212 01:39:28.089397  291455 logs.go:282] 0 containers: []
	W1212 01:39:28.089405  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:28.089414  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:28.089426  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:28.146024  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:28.146058  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:28.160130  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:28.160159  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:28.225838  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:28.217551    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.218334    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.219917    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.220532    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.222161    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:28.225864  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:28.225878  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:28.250733  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:28.250768  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:30.798068  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:30.808169  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:30.808239  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:30.836768  291455 cri.go:89] found id: ""
	I1212 01:39:30.836789  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.836798  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:30.836805  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:30.836863  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:30.860144  291455 cri.go:89] found id: ""
	I1212 01:39:30.860169  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.860179  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:30.860185  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:30.860242  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:30.884081  291455 cri.go:89] found id: ""
	I1212 01:39:30.884107  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.884116  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:30.884122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:30.884180  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:30.908110  291455 cri.go:89] found id: ""
	I1212 01:39:30.908133  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.908147  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:30.908153  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:30.908213  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:30.934406  291455 cri.go:89] found id: ""
	I1212 01:39:30.934428  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.934436  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:30.934449  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:30.934507  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:30.962854  291455 cri.go:89] found id: ""
	I1212 01:39:30.962877  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.962885  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:30.962891  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:30.962963  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:30.986340  291455 cri.go:89] found id: ""
	I1212 01:39:30.986366  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.986375  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:30.986385  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:30.986447  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:31.021526  291455 cri.go:89] found id: ""
	I1212 01:39:31.021557  291455 logs.go:282] 0 containers: []
	W1212 01:39:31.021567  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:31.021576  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:31.021586  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:31.080147  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:31.080186  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:31.094865  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:31.094894  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:31.159994  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:31.150850    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.151532    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.153295    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.153811    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.156044    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:31.160017  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:31.160030  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:31.187806  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:31.187844  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:33.721677  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:33.732122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:33.732196  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:33.756604  291455 cri.go:89] found id: ""
	I1212 01:39:33.756627  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.756636  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:33.756642  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:33.756703  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:33.782055  291455 cri.go:89] found id: ""
	I1212 01:39:33.782079  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.782088  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:33.782094  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:33.782150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:33.806217  291455 cri.go:89] found id: ""
	I1212 01:39:33.806242  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.806250  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:33.806256  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:33.806313  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:33.829556  291455 cri.go:89] found id: ""
	I1212 01:39:33.829580  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.829588  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:33.829595  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:33.829651  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:33.856222  291455 cri.go:89] found id: ""
	I1212 01:39:33.856251  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.856259  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:33.856265  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:33.856323  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:33.886601  291455 cri.go:89] found id: ""
	I1212 01:39:33.886624  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.886639  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:33.886646  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:33.886703  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:33.910597  291455 cri.go:89] found id: ""
	I1212 01:39:33.910621  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.910630  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:33.910636  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:33.910701  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:33.934158  291455 cri.go:89] found id: ""
	I1212 01:39:33.934185  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.934193  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:33.934202  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:33.934214  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:33.958501  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:33.958533  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:33.986448  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:33.986476  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:34.042064  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:34.042099  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:34.056951  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:34.056977  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:34.127667  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:34.120136    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.120757    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.121818    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.122189    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.123766    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:34.120136    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.120757    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.121818    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.122189    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.123766    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
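The block above is one pass of a health-wait loop: minikube polls for a kube-apiserver process, lists CRI containers for each control-plane component (all of which come back empty), then gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs; `kubectl describe nodes` fails because nothing is listening on localhost:8443 yet. The same checks can be reproduced by hand from inside the node. A minimal sketch, assuming a placeholder profile name PROFILE and that curl is available in the node image (both are assumptions, not taken from this report):

    # Sketch only: rerun the wait loop's checks manually (PROFILE is a placeholder).
    minikube ssh -p PROFILE
    # ...then, inside the node:
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'      # is an apiserver process running?
    sudo crictl ps -a --quiet --name=kube-apiserver   # any apiserver container, in any state?
    sudo journalctl -u kubelet -n 50 --no-pager       # why the static pods are not starting
    curl -sk https://localhost:8443/healthz           # is anything answering on the apiserver port?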
	[the wait loop above repeats unchanged at 01:39:36, 01:39:39, 01:39:42, 01:39:45, 01:39:48, 01:39:51, and 01:39:54: every pass finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or kubernetes-dashboard containers, gathers the same set of logs (kubelet, dmesg, describe nodes, containerd, container status, in varying order), and fails "kubectl describe nodes" with "connection refused" against localhost:8443]
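When every pass keeps coming back empty like this, the kubelet journal that the loop collects is usually where the root cause first appears. A short sketch for narrowing it down on the node, under the same assumptions as the sketch above:

    # Sketch only: filter the logs the wait loop gathers for likely failure lines.
    sudo journalctl -u kubelet -n 400 --no-pager | grep -Ei 'error|fail' | tail -n 20
    sudo journalctl -u containerd -n 400 --no-pager | tail -n 20
    systemctl is-active kubelet containerd   # both should report "active"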
	I1212 01:39:57.227361  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:57.237887  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:57.237955  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:57.262202  291455 cri.go:89] found id: ""
	I1212 01:39:57.262227  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.262236  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:57.262242  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:57.262299  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:57.287795  291455 cri.go:89] found id: ""
	I1212 01:39:57.287819  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.287828  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:57.287834  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:57.287900  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:57.312347  291455 cri.go:89] found id: ""
	I1212 01:39:57.312372  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.312381  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:57.312387  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:57.312448  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:57.340890  291455 cri.go:89] found id: ""
	I1212 01:39:57.340914  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.340924  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:57.340930  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:57.340994  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:57.364578  291455 cri.go:89] found id: ""
	I1212 01:39:57.364643  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.364658  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:57.364666  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:57.364735  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:57.389147  291455 cri.go:89] found id: ""
	I1212 01:39:57.389175  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.389184  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:57.389191  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:57.389248  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:57.415275  291455 cri.go:89] found id: ""
	I1212 01:39:57.415300  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.415315  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:57.415322  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:57.415385  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:57.440087  291455 cri.go:89] found id: ""
	I1212 01:39:57.440109  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.440118  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
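	All nine component probes above come back empty: minikube issues the same crictl query per component over SSH and only falls back to log gathering once every list is empty. A condensed shell equivalent of that loop, using exactly the commands and component names logged above:

	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	           kube-controller-manager kindnet kubernetes-dashboard; do
	    # Same query cri.go runs for each component (see ssh_runner lines above).
	    ids=$(sudo crictl ps -a --quiet --name="$c")
	    [ -z "$ids" ] && echo "no container matching \"$c\""
	  done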
	I1212 01:39:57.440127  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:57.440138  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:57.467124  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:57.467150  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:57.522232  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:57.522269  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:57.538082  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:57.538160  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:57.643552  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:57.631917   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.632540   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.634329   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.634855   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.636638   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:57.631917   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.632540   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.634329   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.634855   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.636638   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:57.643574  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:57.643586  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
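	The "describe nodes" fallback is an ordinary command and can be replayed by hand inside the node; the binary path and kubeconfig are exactly as logged. With no apiserver up it exits with status 1 and the same connection-refused stderr shown above:

	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig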
	I1212 01:40:00.169313  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:00.228741  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:00.228823  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:00.303835  291455 cri.go:89] found id: ""
	I1212 01:40:00.305186  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.305353  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:00.309177  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:00.309371  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:00.363791  291455 cri.go:89] found id: ""
	I1212 01:40:00.363817  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.363826  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:00.363832  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:00.363904  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:00.428687  291455 cri.go:89] found id: ""
	I1212 01:40:00.428710  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.428720  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:00.428727  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:00.428821  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:00.471696  291455 cri.go:89] found id: ""
	I1212 01:40:00.471723  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.471732  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:00.471740  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:00.471820  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:00.509321  291455 cri.go:89] found id: ""
	I1212 01:40:00.509347  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.509372  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:00.509381  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:00.509460  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:00.593692  291455 cri.go:89] found id: ""
	I1212 01:40:00.593716  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.593725  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:00.593732  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:00.593800  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:00.662781  291455 cri.go:89] found id: ""
	I1212 01:40:00.662804  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.662813  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:00.662819  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:00.662912  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:00.689999  291455 cri.go:89] found id: ""
	I1212 01:40:00.690023  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.690031  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:00.690041  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:00.690053  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:00.747296  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:00.747331  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:00.761427  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:00.761454  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:00.828444  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:00.819830   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.820596   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.822241   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.822754   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.824365   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:00.819830   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.820596   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.822241   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.822754   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.824365   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:00.828466  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:00.828479  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:00.855218  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:00.855254  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:03.387867  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:03.398566  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:03.398659  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:03.427352  291455 cri.go:89] found id: ""
	I1212 01:40:03.427376  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.427385  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:03.427391  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:03.427456  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:03.451979  291455 cri.go:89] found id: ""
	I1212 01:40:03.452054  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.452069  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:03.452076  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:03.452150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:03.475705  291455 cri.go:89] found id: ""
	I1212 01:40:03.475729  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.475739  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:03.475744  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:03.475831  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:03.500258  291455 cri.go:89] found id: ""
	I1212 01:40:03.500283  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.500293  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:03.500300  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:03.500360  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:03.528939  291455 cri.go:89] found id: ""
	I1212 01:40:03.528962  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.528971  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:03.528976  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:03.529037  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:03.557541  291455 cri.go:89] found id: ""
	I1212 01:40:03.557566  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.557575  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:03.557581  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:03.557645  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:03.611801  291455 cri.go:89] found id: ""
	I1212 01:40:03.611827  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.611837  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:03.611843  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:03.611906  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:03.641008  291455 cri.go:89] found id: ""
	I1212 01:40:03.641034  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.641043  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:03.641053  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:03.641064  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:03.696830  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:03.696868  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:03.710227  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:03.710256  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:03.777119  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:03.769143   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.769540   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.771066   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.771655   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.773341   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:03.769143   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.769540   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.771066   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.771655   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.773341   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:03.777184  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:03.777203  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:03.802465  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:03.802497  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:06.331826  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:06.342482  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:06.342547  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:06.366505  291455 cri.go:89] found id: ""
	I1212 01:40:06.366527  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.366536  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:06.366542  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:06.366599  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:06.391672  291455 cri.go:89] found id: ""
	I1212 01:40:06.391696  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.391705  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:06.391711  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:06.391774  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:06.416914  291455 cri.go:89] found id: ""
	I1212 01:40:06.416941  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.416950  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:06.416956  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:06.417031  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:06.441562  291455 cri.go:89] found id: ""
	I1212 01:40:06.441584  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.441599  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:06.441606  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:06.441665  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:06.469918  291455 cri.go:89] found id: ""
	I1212 01:40:06.469942  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.469951  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:06.469957  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:06.470014  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:06.494455  291455 cri.go:89] found id: ""
	I1212 01:40:06.494478  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.494487  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:06.494503  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:06.494579  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:06.520013  291455 cri.go:89] found id: ""
	I1212 01:40:06.520037  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.520046  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:06.520052  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:06.520108  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:06.571478  291455 cri.go:89] found id: ""
	I1212 01:40:06.571509  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.571518  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:06.571528  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:06.571539  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:06.616555  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:06.616594  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:06.657561  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:06.657589  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:06.715328  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:06.715409  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:06.728591  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:06.728620  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:06.792104  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:06.783643   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.784436   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.785957   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.786254   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.787744   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:06.783643   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.784436   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.785957   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.786254   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.787744   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:09.292912  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:09.303462  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:09.303537  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:09.329031  291455 cri.go:89] found id: ""
	I1212 01:40:09.329057  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.329066  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:09.329072  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:09.329188  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:09.353474  291455 cri.go:89] found id: ""
	I1212 01:40:09.353498  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.353507  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:09.353513  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:09.353570  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:09.380805  291455 cri.go:89] found id: ""
	I1212 01:40:09.380830  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.380839  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:09.380845  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:09.380959  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:09.408831  291455 cri.go:89] found id: ""
	I1212 01:40:09.408854  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.408862  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:09.408868  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:09.408943  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:09.433352  291455 cri.go:89] found id: ""
	I1212 01:40:09.433374  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.433383  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:09.433389  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:09.433450  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:09.458129  291455 cri.go:89] found id: ""
	I1212 01:40:09.458149  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.458158  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:09.458165  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:09.458222  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:09.484528  291455 cri.go:89] found id: ""
	I1212 01:40:09.484552  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.484560  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:09.484567  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:09.484624  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:09.512777  291455 cri.go:89] found id: ""
	I1212 01:40:09.512802  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.512811  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:09.512820  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:09.512831  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:09.563517  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:09.563545  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:09.660558  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:09.660595  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:09.674516  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:09.674541  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:09.738215  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:09.730040   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.730861   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.732394   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.732881   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.734347   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:09.730040   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.730861   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.732394   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.732881   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.734347   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:09.738241  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:09.738253  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:12.263748  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:12.273959  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:12.274029  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:12.297055  291455 cri.go:89] found id: ""
	I1212 01:40:12.297087  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.297096  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:12.297118  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:12.297179  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:12.322284  291455 cri.go:89] found id: ""
	I1212 01:40:12.322308  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.322317  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:12.322323  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:12.322397  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:12.345905  291455 cri.go:89] found id: ""
	I1212 01:40:12.345929  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.345938  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:12.345944  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:12.346024  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:12.370571  291455 cri.go:89] found id: ""
	I1212 01:40:12.370593  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.370602  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:12.370608  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:12.370695  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:12.397426  291455 cri.go:89] found id: ""
	I1212 01:40:12.397473  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.397495  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:12.397514  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:12.397602  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:12.426531  291455 cri.go:89] found id: ""
	I1212 01:40:12.426556  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.426564  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:12.426571  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:12.426644  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:12.450837  291455 cri.go:89] found id: ""
	I1212 01:40:12.450864  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.450874  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:12.450882  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:12.450941  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:12.475392  291455 cri.go:89] found id: ""
	I1212 01:40:12.475415  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.475423  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:12.475433  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:12.475443  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:12.500596  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:12.500630  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:12.539878  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:12.539912  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:12.636980  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:12.637024  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:12.651233  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:12.651261  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:12.719321  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:12.710320   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.711168   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.712905   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.713556   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.715342   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:12.710320   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.711168   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.712905   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.713556   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.715342   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:15.219607  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:15.230736  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:15.230837  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:15.255192  291455 cri.go:89] found id: ""
	I1212 01:40:15.255216  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.255225  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:15.255250  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:15.255312  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:15.280065  291455 cri.go:89] found id: ""
	I1212 01:40:15.280088  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.280097  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:15.280103  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:15.280182  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:15.305428  291455 cri.go:89] found id: ""
	I1212 01:40:15.305451  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.305460  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:15.305467  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:15.305533  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:15.329513  291455 cri.go:89] found id: ""
	I1212 01:40:15.329537  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.329545  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:15.329552  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:15.329612  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:15.353724  291455 cri.go:89] found id: ""
	I1212 01:40:15.353748  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.353757  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:15.353764  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:15.353821  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:15.379891  291455 cri.go:89] found id: ""
	I1212 01:40:15.379921  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.379930  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:15.379936  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:15.379994  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:15.410206  291455 cri.go:89] found id: ""
	I1212 01:40:15.410232  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.410242  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:15.410249  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:15.410308  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:15.436574  291455 cri.go:89] found id: ""
	I1212 01:40:15.436607  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.436616  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:15.436628  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:15.436640  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:15.496631  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:15.496672  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:15.511586  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:15.511614  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:15.643166  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:15.635198   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.635698   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.637279   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.637833   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.639441   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:15.635198   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.635698   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.637279   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.637833   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.639441   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:15.643192  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:15.643208  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:15.668006  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:15.668044  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:18.199232  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:18.210162  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:18.210237  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:18.235304  291455 cri.go:89] found id: ""
	I1212 01:40:18.235330  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.235339  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:18.235347  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:18.235412  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:18.261126  291455 cri.go:89] found id: ""
	I1212 01:40:18.261149  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.261157  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:18.261163  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:18.261225  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:18.285920  291455 cri.go:89] found id: ""
	I1212 01:40:18.285946  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.285954  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:18.285961  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:18.286056  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:18.310447  291455 cri.go:89] found id: ""
	I1212 01:40:18.310490  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.310500  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:18.310523  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:18.310601  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:18.334613  291455 cri.go:89] found id: ""
	I1212 01:40:18.334643  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.334653  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:18.334659  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:18.334725  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:18.363763  291455 cri.go:89] found id: ""
	I1212 01:40:18.363787  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.363797  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:18.363803  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:18.363864  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:18.389696  291455 cri.go:89] found id: ""
	I1212 01:40:18.389730  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.389739  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:18.389745  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:18.389812  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:18.416961  291455 cri.go:89] found id: ""
	I1212 01:40:18.417035  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.417059  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:18.417077  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:18.417104  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:18.474235  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:18.474268  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:18.487640  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:18.487666  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:18.567561  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:18.554594   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.555595   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.560540   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.560843   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.562408   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
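	(The failing probe above is the in-node "describe nodes" call: kubectl cannot reach an apiserver on localhost:8443 because none is running. A minimal manual sketch for reproducing it, assuming the functional-767012 profile from this run and the binary path shown in the log — the commands are copied from the Run: lines, only wrapped in minikube ssh:)

	    # Re-run the same "describe nodes" probe the retry loop runs:
	    minikube -p functional-767012 ssh -- \
	      sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig

	    # "connection refused" on localhost:8443 means no apiserver process;
	    # the loop confirms this with the same pgrep it runs between retries:
	    minikube -p functional-767012 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'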
	I1212 01:40:18.567584  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:18.567597  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:18.597523  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:18.597557  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:21.132296  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:21.142685  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:21.142760  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:21.171993  291455 cri.go:89] found id: ""
	I1212 01:40:21.172020  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.172029  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:21.172035  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:21.172096  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:21.195907  291455 cri.go:89] found id: ""
	I1212 01:40:21.195929  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.195938  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:21.195944  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:21.196007  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:21.219496  291455 cri.go:89] found id: ""
	I1212 01:40:21.219524  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.219533  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:21.219540  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:21.219601  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:21.243807  291455 cri.go:89] found id: ""
	I1212 01:40:21.243834  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.243844  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:21.243850  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:21.243910  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:21.268956  291455 cri.go:89] found id: ""
	I1212 01:40:21.268977  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.268986  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:21.268993  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:21.269052  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:21.297557  291455 cri.go:89] found id: ""
	I1212 01:40:21.297580  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.297588  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:21.297595  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:21.297652  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:21.321755  291455 cri.go:89] found id: ""
	I1212 01:40:21.321776  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.321791  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:21.321798  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:21.321861  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:21.349054  291455 cri.go:89] found id: ""
	I1212 01:40:21.349076  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.349085  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
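	(The component probes above all follow one pattern: crictl ps -a --quiet --name=<component> prints matching container IDs, so empty output is what logs.go reports as "no container found". A condensed, runnable sketch of that loop — component names and flags exactly as in the log; run inside the node:)

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      # Empty output = no container for this component (the W-level lines above).
	      if [ -z "$ids" ]; then
	        echo "no container matching \"$name\""
	      else
	        echo "$name: $ids"
	      fi
	    done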
	I1212 01:40:21.349094  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:21.349108  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:21.374597  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:21.374636  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:21.403444  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:21.403469  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:21.461656  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:21.461690  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:21.475293  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:21.475320  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:21.560836  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:21.545907   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.546745   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.548429   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.548732   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.550543   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:24.061094  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:24.071831  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:24.071913  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:24.097936  291455 cri.go:89] found id: ""
	I1212 01:40:24.097962  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.097971  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:24.097978  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:24.098036  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:24.127785  291455 cri.go:89] found id: ""
	I1212 01:40:24.127809  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.127819  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:24.127826  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:24.127889  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:24.153026  291455 cri.go:89] found id: ""
	I1212 01:40:24.153052  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.153063  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:24.153068  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:24.153127  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:24.176972  291455 cri.go:89] found id: ""
	I1212 01:40:24.176997  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.177006  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:24.177013  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:24.177073  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:24.213590  291455 cri.go:89] found id: ""
	I1212 01:40:24.213614  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.213623  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:24.213638  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:24.213696  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:24.241058  291455 cri.go:89] found id: ""
	I1212 01:40:24.241084  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.241092  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:24.241099  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:24.241158  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:24.265936  291455 cri.go:89] found id: ""
	I1212 01:40:24.265977  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.265985  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:24.265991  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:24.266050  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:24.289751  291455 cri.go:89] found id: ""
	I1212 01:40:24.289779  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.289788  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:24.289798  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:24.289809  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:24.316973  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:24.316999  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:24.372346  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:24.372380  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:24.385931  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:24.385960  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:24.453792  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:24.445261   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.445682   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.447332   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.447939   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.449784   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:24.453813  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:24.453826  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:26.980134  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:26.991597  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:26.991671  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:27.019040  291455 cri.go:89] found id: ""
	I1212 01:40:27.019064  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.019073  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:27.019080  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:27.019154  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:27.046812  291455 cri.go:89] found id: ""
	I1212 01:40:27.046841  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.046854  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:27.046860  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:27.046968  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:27.071383  291455 cri.go:89] found id: ""
	I1212 01:40:27.071405  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.071414  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:27.071420  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:27.071490  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:27.095638  291455 cri.go:89] found id: ""
	I1212 01:40:27.095663  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.095672  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:27.095678  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:27.095755  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:27.119028  291455 cri.go:89] found id: ""
	I1212 01:40:27.119050  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.119059  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:27.119064  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:27.119123  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:27.143722  291455 cri.go:89] found id: ""
	I1212 01:40:27.143748  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.143757  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:27.143763  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:27.143839  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:27.167989  291455 cri.go:89] found id: ""
	I1212 01:40:27.168066  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.168088  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:27.168097  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:27.168168  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:27.193229  291455 cri.go:89] found id: ""
	I1212 01:40:27.193269  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.193279  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:27.193289  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:27.193304  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:27.248752  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:27.248788  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:27.262591  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:27.262627  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:27.329086  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:27.321229   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.321673   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.323243   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.323775   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.325351   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:27.329111  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:27.329123  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:27.354405  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:27.354442  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:29.885003  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:29.896299  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:29.896378  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:29.922911  291455 cri.go:89] found id: ""
	I1212 01:40:29.922945  291455 logs.go:282] 0 containers: []
	W1212 01:40:29.922954  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:29.922961  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:29.923063  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:29.949238  291455 cri.go:89] found id: ""
	I1212 01:40:29.949264  291455 logs.go:282] 0 containers: []
	W1212 01:40:29.949273  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:29.949280  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:29.949338  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:29.974510  291455 cri.go:89] found id: ""
	I1212 01:40:29.974536  291455 logs.go:282] 0 containers: []
	W1212 01:40:29.974545  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:29.974551  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:29.974608  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:29.999116  291455 cri.go:89] found id: ""
	I1212 01:40:29.999142  291455 logs.go:282] 0 containers: []
	W1212 01:40:29.999151  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:29.999157  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:29.999223  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:30.078011  291455 cri.go:89] found id: ""
	I1212 01:40:30.078040  291455 logs.go:282] 0 containers: []
	W1212 01:40:30.078050  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:30.078058  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:30.078132  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:30.105966  291455 cri.go:89] found id: ""
	I1212 01:40:30.105993  291455 logs.go:282] 0 containers: []
	W1212 01:40:30.106003  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:30.106010  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:30.106078  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:30.134703  291455 cri.go:89] found id: ""
	I1212 01:40:30.134726  291455 logs.go:282] 0 containers: []
	W1212 01:40:30.134735  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:30.134780  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:30.134874  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:30.161984  291455 cri.go:89] found id: ""
	I1212 01:40:30.162009  291455 logs.go:282] 0 containers: []
	W1212 01:40:30.162018  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:30.162028  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:30.162039  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:30.193075  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:30.193103  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:30.252472  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:30.252508  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:30.266246  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:30.266276  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:30.333852  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:30.325323   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:30.325890   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:30.327426   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:30.327865   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:30.329291   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:30.333874  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:30.333886  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:32.860948  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:32.872085  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:32.872163  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:32.901386  291455 cri.go:89] found id: ""
	I1212 01:40:32.901410  291455 logs.go:282] 0 containers: []
	W1212 01:40:32.901425  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:32.901438  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:32.901499  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:32.926818  291455 cri.go:89] found id: ""
	I1212 01:40:32.926844  291455 logs.go:282] 0 containers: []
	W1212 01:40:32.926853  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:32.926859  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:32.926927  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:32.956149  291455 cri.go:89] found id: ""
	I1212 01:40:32.956187  291455 logs.go:282] 0 containers: []
	W1212 01:40:32.956196  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:32.956202  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:32.956259  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:32.988134  291455 cri.go:89] found id: ""
	I1212 01:40:32.988159  291455 logs.go:282] 0 containers: []
	W1212 01:40:32.988168  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:32.988174  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:32.988231  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:33.014432  291455 cri.go:89] found id: ""
	I1212 01:40:33.014459  291455 logs.go:282] 0 containers: []
	W1212 01:40:33.014468  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:33.014474  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:33.014534  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:33.039814  291455 cri.go:89] found id: ""
	I1212 01:40:33.039843  291455 logs.go:282] 0 containers: []
	W1212 01:40:33.039852  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:33.039859  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:33.039921  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:33.068378  291455 cri.go:89] found id: ""
	I1212 01:40:33.068401  291455 logs.go:282] 0 containers: []
	W1212 01:40:33.068410  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:33.068417  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:33.068475  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:33.097661  291455 cri.go:89] found id: ""
	I1212 01:40:33.097725  291455 logs.go:282] 0 containers: []
	W1212 01:40:33.097750  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:33.097775  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:33.097803  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:33.129775  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:33.129802  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:33.189298  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:33.189332  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:33.202981  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:33.203026  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:33.264626  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:33.256228   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:33.256801   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:33.258449   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:33.259112   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:33.260717   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:33.264648  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:33.264665  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:35.791109  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:35.807877  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:35.807951  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:35.851416  291455 cri.go:89] found id: ""
	I1212 01:40:35.851442  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.851450  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:35.851456  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:35.851518  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:35.888920  291455 cri.go:89] found id: ""
	I1212 01:40:35.888943  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.888952  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:35.888958  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:35.889018  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:35.915592  291455 cri.go:89] found id: ""
	I1212 01:40:35.915618  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.915628  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:35.915634  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:35.915715  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:35.939272  291455 cri.go:89] found id: ""
	I1212 01:40:35.939296  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.939305  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:35.939311  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:35.939370  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:35.968216  291455 cri.go:89] found id: ""
	I1212 01:40:35.968244  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.968253  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:35.968259  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:35.968317  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:35.993761  291455 cri.go:89] found id: ""
	I1212 01:40:35.993785  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.993796  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:35.993803  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:35.993863  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:36.022585  291455 cri.go:89] found id: ""
	I1212 01:40:36.022612  291455 logs.go:282] 0 containers: []
	W1212 01:40:36.022633  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:36.022640  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:36.022712  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:36.052933  291455 cri.go:89] found id: ""
	I1212 01:40:36.052955  291455 logs.go:282] 0 containers: []
	W1212 01:40:36.052965  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:36.052974  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:36.052991  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:36.122317  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:36.113883   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:36.114412   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:36.115894   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:36.116408   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:36.118260   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:36.122340  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:36.122353  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:36.146907  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:36.146940  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:36.174411  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:36.174444  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:36.229229  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:36.229259  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
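	(With every probe empty, each retry falls back to gathering the same five log sources. The equivalent manual commands, verbatim from the Run: lines above; run inside the node:)

	    sudo journalctl -u kubelet -n 400                                        # kubelet
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # dmesg
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig                              # describe nodes
	    sudo journalctl -u containerd -n 400                                     # containerd
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a            # container status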
	I1212 01:40:38.742843  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:38.753061  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:38.753132  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:38.777998  291455 cri.go:89] found id: ""
	I1212 01:40:38.778024  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.778033  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:38.778039  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:38.778098  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:38.819601  291455 cri.go:89] found id: ""
	I1212 01:40:38.819630  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.819639  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:38.819649  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:38.819726  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:38.863492  291455 cri.go:89] found id: ""
	I1212 01:40:38.863555  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.863567  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:38.863574  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:38.863640  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:38.896081  291455 cri.go:89] found id: ""
	I1212 01:40:38.896109  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.896118  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:38.896124  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:38.896189  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:38.923782  291455 cri.go:89] found id: ""
	I1212 01:40:38.923824  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.923832  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:38.923838  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:38.923896  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:38.948257  291455 cri.go:89] found id: ""
	I1212 01:40:38.948289  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.948305  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:38.948312  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:38.948379  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:38.974066  291455 cri.go:89] found id: ""
	I1212 01:40:38.974090  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.974098  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:38.974104  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:38.974163  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:38.999566  291455 cri.go:89] found id: ""
	I1212 01:40:38.999654  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.999670  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:38.999681  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:38.999693  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:39.032809  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:39.032845  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:39.061204  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:39.061234  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:39.116485  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:39.116516  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:39.129984  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:39.130014  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:39.195391  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:39.187100   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.187857   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.189545   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.190069   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.191706   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:41.695676  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:41.707011  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:41.707085  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:41.731224  291455 cri.go:89] found id: ""
	I1212 01:40:41.731295  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.731318  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:41.731337  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:41.731422  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:41.759193  291455 cri.go:89] found id: ""
	I1212 01:40:41.759266  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.759289  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:41.759308  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:41.759394  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:41.793923  291455 cri.go:89] found id: ""
	I1212 01:40:41.793994  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.794017  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:41.794038  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:41.794121  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:41.844183  291455 cri.go:89] found id: ""
	I1212 01:40:41.844246  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.844277  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:41.844297  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:41.844405  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:41.880181  291455 cri.go:89] found id: ""
	I1212 01:40:41.880253  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.880288  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:41.880312  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:41.880412  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:41.908685  291455 cri.go:89] found id: ""
	I1212 01:40:41.908760  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.908776  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:41.908783  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:41.908840  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:41.933232  291455 cri.go:89] found id: ""
	I1212 01:40:41.933257  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.933265  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:41.933272  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:41.933361  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:41.957941  291455 cri.go:89] found id: ""
	I1212 01:40:41.957966  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.957975  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:41.957993  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:41.958004  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:42.012839  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:42.012878  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:42.028378  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:42.028410  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:42.099435  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:42.089806   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.091059   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.092313   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.093469   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.094522   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:42.099461  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:42.099477  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:42.127956  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:42.127997  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
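	[Editor's note] The block above is one complete iteration of the wait loop that produced this log: the harness polls over SSH for a kube-apiserver process, lists CRI containers for each expected component by name, and, finding none, gathers kubelet, dmesg, describe-nodes, containerd, and container-status output before retrying roughly three seconds later. The same iteration repeats below until the test times out. A condensed bash sketch of one pass, built only from commands that appear verbatim in this log:

	    # Poll each expected component by container name; empty output means absent.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"
	    done

	    # Diagnostics gathered after a failed pass.
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	         --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u containerd -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a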
	I1212 01:40:44.666695  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:44.677340  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:44.677417  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:44.701562  291455 cri.go:89] found id: ""
	I1212 01:40:44.701585  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.701594  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:44.701600  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:44.701657  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:44.726430  291455 cri.go:89] found id: ""
	I1212 01:40:44.726452  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.726460  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:44.726466  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:44.726555  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:44.755275  291455 cri.go:89] found id: ""
	I1212 01:40:44.755298  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.755306  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:44.755312  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:44.755367  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:44.780079  291455 cri.go:89] found id: ""
	I1212 01:40:44.780105  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.780114  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:44.780120  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:44.780194  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:44.869405  291455 cri.go:89] found id: ""
	I1212 01:40:44.869429  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.869437  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:44.869444  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:44.869510  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:44.895160  291455 cri.go:89] found id: ""
	I1212 01:40:44.895186  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.895195  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:44.895201  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:44.895258  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:44.919698  291455 cri.go:89] found id: ""
	I1212 01:40:44.919721  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.919730  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:44.919736  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:44.919792  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:44.944054  291455 cri.go:89] found id: ""
	I1212 01:40:44.944076  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.944085  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:44.944093  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:44.944104  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:44.968670  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:44.968701  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:44.997722  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:44.997750  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:45.076118  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:45.076163  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:45.092613  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:45.092646  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:45.185594  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:45.175075   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.176253   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.177119   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.179849   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.180652   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:47.686812  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:47.697462  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:47.697534  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:47.725301  291455 cri.go:89] found id: ""
	I1212 01:40:47.725327  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.725336  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:47.725342  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:47.725406  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:47.750015  291455 cri.go:89] found id: ""
	I1212 01:40:47.750040  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.750050  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:47.750057  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:47.750116  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:47.774576  291455 cri.go:89] found id: ""
	I1212 01:40:47.774604  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.774613  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:47.774620  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:47.774679  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:47.823337  291455 cri.go:89] found id: ""
	I1212 01:40:47.823365  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.823374  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:47.823381  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:47.823451  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:47.863754  291455 cri.go:89] found id: ""
	I1212 01:40:47.863776  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.863785  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:47.863791  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:47.863851  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:47.892358  291455 cri.go:89] found id: ""
	I1212 01:40:47.892383  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.892391  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:47.892398  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:47.892463  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:47.916778  291455 cri.go:89] found id: ""
	I1212 01:40:47.916805  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.916815  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:47.916821  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:47.916900  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:47.942154  291455 cri.go:89] found id: ""
	I1212 01:40:47.942177  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.942185  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:47.942194  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:47.942208  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:47.955644  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:47.955725  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:48.027299  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:48.016837   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.017641   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.019747   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.020636   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.022832   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:48.027326  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:48.027340  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:48.052933  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:48.052970  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:48.089641  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:48.089674  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:50.649196  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:50.660069  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:50.660143  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:50.685271  291455 cri.go:89] found id: ""
	I1212 01:40:50.685299  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.685309  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:50.685316  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:50.685378  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:50.712999  291455 cri.go:89] found id: ""
	I1212 01:40:50.713025  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.713034  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:50.713040  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:50.713099  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:50.737720  291455 cri.go:89] found id: ""
	I1212 01:40:50.737745  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.737754  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:50.737761  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:50.737828  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:50.763261  291455 cri.go:89] found id: ""
	I1212 01:40:50.763286  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.763294  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:50.763300  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:50.763358  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:50.811665  291455 cri.go:89] found id: ""
	I1212 01:40:50.811692  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.811701  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:50.811707  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:50.811768  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:50.870884  291455 cri.go:89] found id: ""
	I1212 01:40:50.870909  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.870921  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:50.870927  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:50.870986  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:50.896362  291455 cri.go:89] found id: ""
	I1212 01:40:50.896387  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.896395  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:50.896401  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:50.896457  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:50.924933  291455 cri.go:89] found id: ""
	I1212 01:40:50.924956  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.924964  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:50.924974  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:50.924986  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:50.982505  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:50.982537  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:50.996444  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:50.996467  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:51.075810  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:51.067309   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.068132   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.069797   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.070277   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.071678   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:51.075896  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:51.075929  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:51.100541  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:51.100577  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:53.629887  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:53.640204  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:53.640274  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:53.665408  291455 cri.go:89] found id: ""
	I1212 01:40:53.665487  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.665511  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:53.665531  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:53.665616  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:53.693593  291455 cri.go:89] found id: ""
	I1212 01:40:53.693620  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.693629  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:53.693635  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:53.693693  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:53.717209  291455 cri.go:89] found id: ""
	I1212 01:40:53.717234  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.717243  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:53.717249  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:53.717305  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:53.742008  291455 cri.go:89] found id: ""
	I1212 01:40:53.742033  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.742042  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:53.742049  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:53.742106  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:53.766463  291455 cri.go:89] found id: ""
	I1212 01:40:53.766489  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.766498  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:53.766505  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:53.766562  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:53.832090  291455 cri.go:89] found id: ""
	I1212 01:40:53.832118  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.832133  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:53.832140  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:53.832201  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:53.877395  291455 cri.go:89] found id: ""
	I1212 01:40:53.877422  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.877431  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:53.877438  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:53.877497  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:53.905857  291455 cri.go:89] found id: ""
	I1212 01:40:53.905883  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.905891  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:53.905900  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:53.905912  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:53.936211  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:53.936236  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:53.990768  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:53.990801  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:54.005707  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:54.005751  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:54.077323  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:54.068912   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.069627   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.071278   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.071804   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.073346   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:54.077345  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:54.077361  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:56.603783  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:56.614362  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:56.614437  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:56.639205  291455 cri.go:89] found id: ""
	I1212 01:40:56.639230  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.639239  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:56.639245  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:56.639302  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:56.664961  291455 cri.go:89] found id: ""
	I1212 01:40:56.664983  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.664991  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:56.664997  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:56.665055  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:56.689125  291455 cri.go:89] found id: ""
	I1212 01:40:56.689148  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.689163  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:56.689169  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:56.689228  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:56.713944  291455 cri.go:89] found id: ""
	I1212 01:40:56.713969  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.713977  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:56.713984  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:56.714045  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:56.742503  291455 cri.go:89] found id: ""
	I1212 01:40:56.742536  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.742546  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:56.742552  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:56.742610  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:56.768074  291455 cri.go:89] found id: ""
	I1212 01:40:56.768101  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.768110  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:56.768116  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:56.768176  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:56.822219  291455 cri.go:89] found id: ""
	I1212 01:40:56.822241  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.822250  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:56.822256  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:56.822326  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:56.877551  291455 cri.go:89] found id: ""
	I1212 01:40:56.877579  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.877588  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:56.877598  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:56.877609  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:56.951400  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:56.942725   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.943403   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.945223   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.945864   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.947463   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:56.951423  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:56.951435  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:56.976432  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:56.976471  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:57.016067  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:57.016095  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:57.076530  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:57.076562  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:59.590650  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:59.601442  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:59.601513  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:59.627392  291455 cri.go:89] found id: ""
	I1212 01:40:59.627418  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.627426  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:59.627433  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:59.627492  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:59.652525  291455 cri.go:89] found id: ""
	I1212 01:40:59.652546  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.652555  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:59.652560  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:59.652620  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:59.677515  291455 cri.go:89] found id: ""
	I1212 01:40:59.677538  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.677546  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:59.677551  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:59.677609  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:59.701508  291455 cri.go:89] found id: ""
	I1212 01:40:59.701531  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.701539  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:59.701545  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:59.701602  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:59.726132  291455 cri.go:89] found id: ""
	I1212 01:40:59.726154  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.726162  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:59.726168  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:59.726228  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:59.751581  291455 cri.go:89] found id: ""
	I1212 01:40:59.751608  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.751617  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:59.751625  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:59.751682  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:59.780780  291455 cri.go:89] found id: ""
	I1212 01:40:59.780805  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.780825  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:59.780836  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:59.780901  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:59.866401  291455 cri.go:89] found id: ""
	I1212 01:40:59.866424  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.866433  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:59.866442  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:59.866453  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:59.921825  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:59.921862  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:59.935338  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:59.935366  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:59.999474  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:59.992159   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.992558   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.993995   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.994293   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.995686   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:59.999546  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:59.999574  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:00.079868  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:00.084769  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:02.719157  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:02.730262  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:02.730335  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:02.756172  291455 cri.go:89] found id: ""
	I1212 01:41:02.756196  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.756206  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:02.756213  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:02.756272  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:02.792420  291455 cri.go:89] found id: ""
	I1212 01:41:02.792445  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.792455  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:02.792461  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:02.792531  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:02.838813  291455 cri.go:89] found id: ""
	I1212 01:41:02.838841  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.838849  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:02.838856  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:02.838918  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:02.886478  291455 cri.go:89] found id: ""
	I1212 01:41:02.886504  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.886513  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:02.886523  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:02.886580  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:02.914286  291455 cri.go:89] found id: ""
	I1212 01:41:02.914309  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.914318  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:02.914333  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:02.914403  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:02.939527  291455 cri.go:89] found id: ""
	I1212 01:41:02.939550  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.939559  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:02.939565  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:02.939624  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:02.965321  291455 cri.go:89] found id: ""
	I1212 01:41:02.965345  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.965354  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:02.965360  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:02.965423  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:02.991292  291455 cri.go:89] found id: ""
	I1212 01:41:02.991316  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.991325  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:02.991341  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:02.991352  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:03.019527  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:03.019562  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:03.051852  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:03.051878  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:03.107633  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:03.107667  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:03.121349  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:03.121375  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:03.186261  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:03.177889   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.178763   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.180270   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.180822   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.182351   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:41:03.177889   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.178763   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.180270   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.180822   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.182351   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:41:05.687947  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:05.698808  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:05.698883  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:05.724019  291455 cri.go:89] found id: ""
	I1212 01:41:05.724043  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.724052  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:05.724058  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:05.724115  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:05.752813  291455 cri.go:89] found id: ""
	I1212 01:41:05.752838  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.752847  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:05.752853  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:05.752917  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:05.777122  291455 cri.go:89] found id: ""
	I1212 01:41:05.777144  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.777152  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:05.777158  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:05.777215  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:05.833235  291455 cri.go:89] found id: ""
	I1212 01:41:05.833260  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.833270  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:05.833276  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:05.833350  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:05.880483  291455 cri.go:89] found id: ""
	I1212 01:41:05.880506  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.880514  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:05.880520  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:05.880583  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:05.904810  291455 cri.go:89] found id: ""
	I1212 01:41:05.904834  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.904843  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:05.904849  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:05.904906  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:05.936458  291455 cri.go:89] found id: ""
	I1212 01:41:05.936482  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.936491  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:05.936497  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:05.936585  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:05.965168  291455 cri.go:89] found id: ""
	I1212 01:41:05.965193  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.965202  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:05.965212  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:05.965225  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:06.022621  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:06.022674  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:06.036897  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:06.036926  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:06.105481  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:06.097089   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.097938   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.099584   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.099907   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.101467   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:41:06.097089   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.097938   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.099584   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.099907   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.101467   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:41:06.105505  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:06.105518  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:06.131153  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:06.131186  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:08.659864  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:08.670811  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:08.670881  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:08.694882  291455 cri.go:89] found id: ""
	I1212 01:41:08.694903  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.694911  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:08.694917  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:08.694976  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:08.719560  291455 cri.go:89] found id: ""
	I1212 01:41:08.719590  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.719598  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:08.719605  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:08.719662  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:08.744076  291455 cri.go:89] found id: ""
	I1212 01:41:08.744103  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.744113  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:08.744119  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:08.744177  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:08.772960  291455 cri.go:89] found id: ""
	I1212 01:41:08.772985  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.772994  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:08.773001  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:08.773080  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:08.815633  291455 cri.go:89] found id: ""
	I1212 01:41:08.815659  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.815668  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:08.815674  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:08.815742  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:08.878320  291455 cri.go:89] found id: ""
	I1212 01:41:08.878345  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.878353  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:08.878360  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:08.878450  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:08.904601  291455 cri.go:89] found id: ""
	I1212 01:41:08.904628  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.904636  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:08.904643  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:08.904702  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:08.929638  291455 cri.go:89] found id: ""
	I1212 01:41:08.929660  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.929668  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:08.929678  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:08.929689  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:08.987700  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:08.987732  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:09.006748  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:09.006844  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:09.074571  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:09.066680   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.067299   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.068802   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.069203   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.070675   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:41:09.066680   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.067299   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.068802   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.069203   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.070675   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:41:09.074595  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:09.074607  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:09.099568  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:09.099599  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:11.629539  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:11.640012  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:11.640082  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:11.663460  291455 cri.go:89] found id: ""
	I1212 01:41:11.663485  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.663493  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:11.663500  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:11.663555  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:11.686956  291455 cri.go:89] found id: ""
	I1212 01:41:11.686978  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.686986  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:11.687088  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:11.687150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:11.712890  291455 cri.go:89] found id: ""
	I1212 01:41:11.712913  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.712922  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:11.712928  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:11.712984  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:11.736706  291455 cri.go:89] found id: ""
	I1212 01:41:11.736728  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.736736  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:11.736742  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:11.736800  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:11.759893  291455 cri.go:89] found id: ""
	I1212 01:41:11.759915  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.759923  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:11.759929  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:11.759986  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:11.794524  291455 cri.go:89] found id: ""
	I1212 01:41:11.794548  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.794556  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:11.794563  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:11.794617  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:11.837664  291455 cri.go:89] found id: ""
	I1212 01:41:11.837685  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.837693  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:11.837699  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:11.837758  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:11.876539  291455 cri.go:89] found id: ""
	I1212 01:41:11.876560  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.876568  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:11.876576  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:11.876588  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:11.891935  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:11.891958  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:11.953883  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:11.945499   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.946165   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.947829   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.948378   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.949885   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:41:11.945499   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.946165   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.947829   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.948378   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.949885   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:41:11.953906  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:11.953919  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:11.978361  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:11.978394  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:12.008436  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:12.008467  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:14.566794  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:14.577540  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:14.577620  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:14.603419  291455 cri.go:89] found id: ""
	I1212 01:41:14.603444  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.603453  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:14.603459  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:14.603523  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:14.627963  291455 cri.go:89] found id: ""
	I1212 01:41:14.627986  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.627994  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:14.628000  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:14.628064  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:14.651989  291455 cri.go:89] found id: ""
	I1212 01:41:14.652014  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.652024  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:14.652031  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:14.652089  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:14.680771  291455 cri.go:89] found id: ""
	I1212 01:41:14.680794  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.680802  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:14.680808  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:14.680865  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:14.705454  291455 cri.go:89] found id: ""
	I1212 01:41:14.705479  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.705488  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:14.705494  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:14.705553  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:14.734181  291455 cri.go:89] found id: ""
	I1212 01:41:14.734207  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.734216  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:14.734222  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:14.734279  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:14.758125  291455 cri.go:89] found id: ""
	I1212 01:41:14.758150  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.758159  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:14.758165  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:14.758224  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:14.796212  291455 cri.go:89] found id: ""
	I1212 01:41:14.796239  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.796248  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:14.796257  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:14.796268  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:14.875942  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:14.875982  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:14.893694  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:14.893723  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:14.958664  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:14.950439   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.951146   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.952867   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.953336   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.954860   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:41:14.950439   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.951146   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.952867   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.953336   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.954860   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:41:14.958686  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:14.958698  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:14.983555  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:14.983592  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:17.522313  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:17.532817  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:17.532892  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:17.560757  291455 cri.go:89] found id: ""
	I1212 01:41:17.560779  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.560788  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:17.560795  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:17.560851  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:17.585702  291455 cri.go:89] found id: ""
	I1212 01:41:17.585725  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.585734  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:17.585740  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:17.585807  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:17.614888  291455 cri.go:89] found id: ""
	I1212 01:41:17.614912  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.614920  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:17.614926  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:17.614983  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:17.640684  291455 cri.go:89] found id: ""
	I1212 01:41:17.640706  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.640714  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:17.640721  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:17.640781  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:17.666504  291455 cri.go:89] found id: ""
	I1212 01:41:17.666529  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.666538  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:17.666545  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:17.666619  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:17.693636  291455 cri.go:89] found id: ""
	I1212 01:41:17.693661  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.693670  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:17.693677  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:17.693738  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:17.718203  291455 cri.go:89] found id: ""
	I1212 01:41:17.718270  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.718310  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:17.718337  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:17.718430  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:17.745520  291455 cri.go:89] found id: ""
	I1212 01:41:17.745544  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.745553  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:17.745562  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:17.745574  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:17.809137  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:17.809237  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:17.824842  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:17.824909  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:17.914329  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:17.905491   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.906027   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.907410   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.907912   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.909473   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:41:17.905491   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.906027   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.907410   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.907912   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.909473   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:41:17.914350  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:17.914365  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:17.939510  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:17.939546  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:20.466980  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:20.480747  291455 out.go:203] 
	W1212 01:41:20.483558  291455 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1212 01:41:20.483596  291455 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1212 01:41:20.483610  291455 out.go:285] * Related issues:
	* Related issues:
	W1212 01:41:20.483628  291455 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1212 01:41:20.483644  291455 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1212 01:41:20.486471  291455 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 105
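The wait loop in the trace above is minikube's apiserver probe: every few seconds it looks for a kube-apiserver process (pgrep), then asks crictl for a matching container in any state, and both stay empty until the 6m0s budget runs out. A minimal sketch of re-running the same probe by hand, assuming the binary layout and profile name from this report:

	# Hypothetical manual re-run of the apiserver checks from the trace above.
	PROFILE=newest-cni-256959

	# 1. Any kube-apiserver process inside the node container?
	out/minikube-linux-arm64 ssh -p "$PROFILE" -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	# 2. Any kube-apiserver container known to containerd, running or exited?
	out/minikube-linux-arm64 ssh -p "$PROFILE" -- sudo crictl ps -a --quiet --name=kube-apiserver

	# 3. If both print nothing, the kubelet never started the static pod;
	#    the journal that the log gatherer tails usually explains why.
	out/minikube-linux-arm64 ssh -p "$PROFILE" -- sudo journalctl -u kubelet -n 400

Empty output from steps 1 and 2 puts the fault upstream of the apiserver binary itself (kubelet, static-pod manifests, or the container runtime), which is what the K8S_APISERVER_MISSING exit encodes.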
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-256959
helpers_test.go:244: (dbg) docker inspect newest-cni-256959:

-- stdout --
	[
	    {
	        "Id": "361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b",
	        "Created": "2025-12-12T01:25:15.433462291Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 291584,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T01:35:11.599618298Z",
	            "FinishedAt": "2025-12-12T01:35:10.241180563Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/hostname",
	        "HostsPath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/hosts",
	        "LogPath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b-json.log",
	        "Name": "/newest-cni-256959",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-256959:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-256959",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b",
	                "LowerDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017/merged",
	                "UpperDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017/diff",
	                "WorkDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-256959",
	                "Source": "/var/lib/docker/volumes/newest-cni-256959/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-256959",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-256959",
	                "name.minikube.sigs.k8s.io": "newest-cni-256959",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "345adc76212ae94224c61dd049e472f16ee67ee027a331e11cdf648a15dff74a",
	            "SandboxKey": "/var/run/docker/netns/345adc76212a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-256959": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:19:c4:dc:e5:59",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "08d9e23f02a4d7730d420d79f658bc1854aa3d62ee2a54a8cd34a455b2ba0431",
	                    "EndpointID": "e780ab70cd5a9e96f54f2a272324b26b9e51bece9b706db46ac5aff93fb5ac56",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-256959",
	                        "361f9c16c44a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
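The inspect dump shows the node container itself is healthy: "State.Status" is "running", RestartCount is 0, and the apiserver port 8443/tcp is published on 127.0.0.1:33106. The same fields can be read with docker's Go-template --format flag rather than scanning the JSON (container and network names taken from the dump above; a convenience sketch, not part of the harness):

	# Container state and restart count
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' newest-cni-256959

	# Host port bound to the apiserver's 8443/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-256959

	# Node IP on the profile network
	docker inspect -f '{{(index .NetworkSettings.Networks "newest-cni-256959").IPAddress}}' newest-cni-256959

Taken together with the connection-refused errors earlier, this rules out Docker networking: the port is mapped, but nothing inside the container listens on 8443.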
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-256959 -n newest-cni-256959
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-256959 -n newest-cni-256959: exit status 2 (337.499304ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
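Per minikube's status --help, the exit code encodes component state bitwise (1 = host, 2 = cluster, 4 = kubernetes not running), so exit status 2 with Host reporting Running points at the control plane rather than the container, consistent with the apiserver failure above. Dropping the --format template would print the full per-component breakdown:

	out/minikube-linux-arm64 status -p newest-cni-256959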
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-256959 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-256959 logs -n 25: (1.617450351s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ default-k8s-diff-port-971096 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ pause   │ -p default-k8s-diff-port-971096 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ unpause │ -p default-k8s-diff-port-971096 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p disable-driver-mounts-539158                                                                                                                                                                                                                            │ disable-driver-mounts-539158 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-648696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ stop    │ -p embed-certs-648696 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ addons  │ enable dashboard -p embed-certs-648696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:24 UTC │
	│ image   │ embed-certs-648696 image list --format=json                                                                                                                                                                                                                │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ pause   │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ unpause │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ start   │ -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-361053 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:31 UTC │                     │
	│ stop    │ -p no-preload-361053 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │ 12 Dec 25 01:33 UTC │
	│ addons  │ enable dashboard -p no-preload-361053 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │ 12 Dec 25 01:33 UTC │
	│ start   │ -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-256959 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │                     │
	│ stop    │ -p newest-cni-256959 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │ 12 Dec 25 01:35 UTC │
	│ addons  │ enable dashboard -p newest-cni-256959 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │ 12 Dec 25 01:35 UTC │
	│ start   │ -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
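
The Audit table above is part of the same diagnostics dump the harness collects; it can be reproduced against a live profile with the command the test itself runs (see the helpers_test.go:256 lines at the top of this section):

    # Re-collect the last 25 log lines plus the Audit / Last Start sections for this profile
    out/minikube-linux-arm64 -p newest-cni-256959 logs -n 25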
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 01:35:11
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:35:11.336080  291455 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:35:11.336277  291455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:35:11.336290  291455 out.go:374] Setting ErrFile to fd 2...
	I1212 01:35:11.336296  291455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:35:11.336566  291455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:35:11.336950  291455 out.go:368] Setting JSON to false
	I1212 01:35:11.337843  291455 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8258,"bootTime":1765495054,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 01:35:11.337913  291455 start.go:143] virtualization:  
	I1212 01:35:11.341103  291455 out.go:179] * [newest-cni-256959] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:35:11.345273  291455 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:35:11.345376  291455 notify.go:221] Checking for updates...
	I1212 01:35:11.351231  291455 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:35:11.354134  291455 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:35:11.357086  291455 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 01:35:11.359981  291455 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:35:11.363090  291455 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:35:11.366381  291455 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:35:11.367076  291455 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:35:11.397719  291455 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:35:11.397845  291455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:35:11.450218  291455 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:35:11.441400779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:35:11.450324  291455 docker.go:319] overlay module found
	I1212 01:35:11.453495  291455 out.go:179] * Using the docker driver based on existing profile
	I1212 01:35:11.456257  291455 start.go:309] selected driver: docker
	I1212 01:35:11.456272  291455 start.go:927] validating driver "docker" against &{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:11.456385  291455 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:35:11.457105  291455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:35:11.512167  291455 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:35:11.503270098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:35:11.512501  291455 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 01:35:11.512533  291455 cni.go:84] Creating CNI manager for ""
	I1212 01:35:11.512581  291455 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:35:11.512620  291455 start.go:353] cluster config:
	{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:11.517595  291455 out.go:179] * Starting "newest-cni-256959" primary control-plane node in "newest-cni-256959" cluster
	I1212 01:35:11.520355  291455 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 01:35:11.523510  291455 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:35:11.526310  291455 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:35:11.526350  291455 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 01:35:11.526380  291455 cache.go:65] Caching tarball of preloaded images
	I1212 01:35:11.526401  291455 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:35:11.526463  291455 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 01:35:11.526474  291455 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 01:35:11.526577  291455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:35:11.545949  291455 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:35:11.545972  291455 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:35:11.545990  291455 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:35:11.546021  291455 start.go:360] acquireMachinesLock for newest-cni-256959: {Name:mke4c35c218ad59b1da2c46074b57e71134fc7be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:35:11.546106  291455 start.go:364] duration metric: took 61.449µs to acquireMachinesLock for "newest-cni-256959"
	I1212 01:35:11.546128  291455 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:35:11.546140  291455 fix.go:54] fixHost starting: 
	I1212 01:35:11.546394  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:11.562986  291455 fix.go:112] recreateIfNeeded on newest-cni-256959: state=Stopped err=<nil>
	W1212 01:35:11.563044  291455 fix.go:138] unexpected machine state, will restart: <nil>
	W1212 01:35:12.535792  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:12.641222  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:12.704850  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:35:12.704951  287206 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 01:35:12.708213  287206 out.go:179] * Enabled addons: 
	I1212 01:35:12.711265  287206 addons.go:530] duration metric: took 1m55.054971797s for enable addons: enabled=[]
	W1212 01:35:14.536558  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:11.566225  291455 out.go:252] * Restarting existing docker container for "newest-cni-256959" ...
	I1212 01:35:11.566307  291455 cli_runner.go:164] Run: docker start newest-cni-256959
	I1212 01:35:11.824711  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:11.850549  291455 kic.go:430] container "newest-cni-256959" state is running.
	I1212 01:35:11.850948  291455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:35:11.874496  291455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:35:11.875491  291455 machine.go:94] provisionDockerMachine start ...
	I1212 01:35:11.875566  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:11.904543  291455 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:11.904867  291455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 01:35:11.904894  291455 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:35:11.905649  291455 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 01:35:15.062841  291455 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:35:15.062884  291455 ubuntu.go:182] provisioning hostname "newest-cni-256959"
	I1212 01:35:15.062966  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.081374  291455 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:15.081715  291455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 01:35:15.081732  291455 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-256959 && echo "newest-cni-256959" | sudo tee /etc/hostname
	I1212 01:35:15.244594  291455 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:35:15.244717  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.262885  291455 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:15.263226  291455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 01:35:15.263249  291455 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-256959' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-256959/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-256959' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:35:15.415381  291455 main.go:143] libmachine: SSH cmd err, output: <nil>: 
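
The SSH command above is an idempotent /etc/hosts fix-up: if nothing in the file already maps to the hostname, it rewrites an existing 127.0.1.1 alias in place, otherwise it appends one; if the hostname already resolves it does nothing. A quick check of the result from the host (a sketch using minikube's ssh passthrough):

    # The node's /etc/hosts should now carry the profile hostname on 127.0.1.1
    out/minikube-linux-arm64 -p newest-cni-256959 ssh -- grep newest-cni-256959 /etc/hosts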
	I1212 01:35:15.415407  291455 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 01:35:15.415450  291455 ubuntu.go:190] setting up certificates
	I1212 01:35:15.415469  291455 provision.go:84] configureAuth start
	I1212 01:35:15.415542  291455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:35:15.432184  291455 provision.go:143] copyHostCerts
	I1212 01:35:15.432260  291455 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 01:35:15.432274  291455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 01:35:15.432771  291455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 01:35:15.432891  291455 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 01:35:15.432905  291455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 01:35:15.432935  291455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 01:35:15.433008  291455 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 01:35:15.433018  291455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 01:35:15.433044  291455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 01:35:15.433100  291455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.newest-cni-256959 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-256959]
	I1212 01:35:15.664957  291455 provision.go:177] copyRemoteCerts
	I1212 01:35:15.665025  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:35:15.665084  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.682010  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:15.786690  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:35:15.804464  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:35:15.821597  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 01:35:15.838753  291455 provision.go:87] duration metric: took 423.263374ms to configureAuth
	I1212 01:35:15.838782  291455 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:35:15.839040  291455 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:35:15.839053  291455 machine.go:97] duration metric: took 3.963544394s to provisionDockerMachine
	I1212 01:35:15.839061  291455 start.go:293] postStartSetup for "newest-cni-256959" (driver="docker")
	I1212 01:35:15.839072  291455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:35:15.839119  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:35:15.839169  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.855712  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:15.959303  291455 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:35:15.962341  291455 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:35:15.962368  291455 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:35:15.962380  291455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 01:35:15.962429  291455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 01:35:15.962509  291455 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 01:35:15.962609  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:35:15.969472  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:35:15.986194  291455 start.go:296] duration metric: took 147.119175ms for postStartSetup
	I1212 01:35:15.986304  291455 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:35:15.986375  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:16.005019  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:16.107859  291455 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:35:16.112663  291455 fix.go:56] duration metric: took 4.566516262s for fixHost
	I1212 01:35:16.112691  291455 start.go:83] releasing machines lock for "newest-cni-256959", held for 4.566573288s
	I1212 01:35:16.112760  291455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:35:16.129477  291455 ssh_runner.go:195] Run: cat /version.json
	I1212 01:35:16.129531  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:16.129775  291455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:35:16.129824  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:16.153158  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:16.155921  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:16.367474  291455 ssh_runner.go:195] Run: systemctl --version
	I1212 01:35:16.373832  291455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:35:16.378022  291455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:35:16.378104  291455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:35:16.385747  291455 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 01:35:16.385772  291455 start.go:496] detecting cgroup driver to use...
	I1212 01:35:16.385819  291455 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:35:16.385882  291455 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 01:35:16.403657  291455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 01:35:16.417469  291455 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:35:16.417564  291455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:35:16.433612  291455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:35:16.446861  291455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:35:16.554018  291455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:35:16.672193  291455 docker.go:234] disabling docker service ...
	I1212 01:35:16.672283  291455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:35:16.687238  291455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:35:16.700659  291455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:35:16.812563  291455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:35:16.928270  291455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:35:16.941185  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:35:16.957067  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 01:35:16.966276  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 01:35:16.975221  291455 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 01:35:16.975292  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 01:35:16.984294  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:35:16.993328  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 01:35:17.004796  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:35:17.015289  291455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:35:17.023922  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 01:35:17.036658  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 01:35:17.046732  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 01:35:17.056354  291455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:35:17.064063  291455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:35:17.071833  291455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:17.188012  291455 ssh_runner.go:195] Run: sudo systemctl restart containerd
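
The run of sed edits above rewrites /etc/containerd/config.toml in place before the restart: pin the sandbox (pause) image to registry.k8s.io/pause:3.10.1, relax restrict_oom_score_adj, select the cgroupfs driver via SystemdCgroup = false, normalize the runtime to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports. A spot-check of the result on the node (a sketch):

    # Confirm what the sed edits left behind, then that the restart took
    sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    sudo systemctl is-active containerd   # expect: active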
	I1212 01:35:17.306110  291455 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 01:35:17.306231  291455 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 01:35:17.309882  291455 start.go:564] Will wait 60s for crictl version
	I1212 01:35:17.309968  291455 ssh_runner.go:195] Run: which crictl
	I1212 01:35:17.313475  291455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:35:17.340045  291455 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 01:35:17.340140  291455 ssh_runner.go:195] Run: containerd --version
	I1212 01:35:17.360301  291455 ssh_runner.go:195] Run: containerd --version
	I1212 01:35:17.385714  291455 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 01:35:17.388490  291455 cli_runner.go:164] Run: docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:35:17.404979  291455 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 01:35:17.409350  291455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
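
The one-liner above is minikube's rewrite pattern for /etc/hosts entries: filter out any stale host.minikube.internal line, append the current network gateway mapping, stage the result in /tmp, then `sudo cp` it back. The copy (rather than a rename or `sed -i`) matters inside a container, where /etc/hosts is bind-mounted and must be overwritten in place. Spelled out, with the gateway value from this run:

    GATEWAY=192.168.76.1   # docker network gateway for this profile (from the log)
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '%s\thost.minikube.internal\n' "$GATEWAY"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$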
	I1212 01:35:17.422610  291455 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 01:35:17.425426  291455 kubeadm.go:884] updating cluster {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:35:17.425578  291455 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:35:17.425675  291455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:35:17.450191  291455 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:35:17.450217  291455 containerd.go:534] Images already preloaded, skipping extraction
	I1212 01:35:17.450277  291455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:35:17.474185  291455 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:35:17.474220  291455 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:35:17.474228  291455 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1212 01:35:17.474373  291455 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-256959 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
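
The [Unit]/[Service] fragment above is the kubelet systemd drop-in; the log just below shows it scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes) alongside /lib/systemd/system/kubelet.service, then activated with daemon-reload and start. To inspect the merged unit on the node (a sketch):

    systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
    systemctl status kubelet --no-pager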
	I1212 01:35:17.474472  291455 ssh_runner.go:195] Run: sudo crictl info
	I1212 01:35:17.498662  291455 cni.go:84] Creating CNI manager for ""
	I1212 01:35:17.498685  291455 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:35:17.498869  291455 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 01:35:17.498905  291455 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-256959 NodeName:newest-cni-256959 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:35:17.499182  291455 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-256959"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
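
This generated manifest is what minikube hands to kubeadm: the log below stages it as /var/tmp/minikube/kubeadm.yaml.new (2235 bytes), presumably promoted to kubeadm.yaml when it differs from what is already on the node. On a clean first start the corresponding invocation is roughly the following (a sketch; minikube also passes an --ignore-preflight-errors list, omitted here):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml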
	
	I1212 01:35:17.499276  291455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 01:35:17.511920  291455 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:35:17.512017  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:35:17.519602  291455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:35:17.532107  291455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 01:35:17.545262  291455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1212 01:35:17.557618  291455 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:35:17.561053  291455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:35:17.570894  291455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:17.675958  291455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:35:17.692695  291455 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959 for IP: 192.168.76.2
	I1212 01:35:17.692715  291455 certs.go:195] generating shared ca certs ...
	I1212 01:35:17.692750  291455 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:17.692911  291455 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 01:35:17.692980  291455 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 01:35:17.692995  291455 certs.go:257] generating profile certs ...
	I1212 01:35:17.693112  291455 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key
	I1212 01:35:17.693202  291455 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93
	I1212 01:35:17.693309  291455 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key
	I1212 01:35:17.693447  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 01:35:17.693518  291455 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 01:35:17.693536  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:35:17.693582  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:35:17.693632  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:35:17.693666  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 01:35:17.693747  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:35:17.694397  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:35:17.712974  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:35:17.738035  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:35:17.758905  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:35:17.776423  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:35:17.805243  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:35:17.826665  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:35:17.847012  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:35:17.868946  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:35:17.887272  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 01:35:17.904023  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 01:35:17.920802  291455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:35:17.933645  291455 ssh_runner.go:195] Run: openssl version
	I1212 01:35:17.939797  291455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.946909  291455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:35:17.954537  291455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.958217  291455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.958301  291455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.998878  291455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:35:18.008093  291455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.016725  291455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 01:35:18.025237  291455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.029387  291455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.029458  291455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.072423  291455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:35:18.080329  291455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.088043  291455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 01:35:18.095703  291455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.100065  291455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.100135  291455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.141016  291455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
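
The three hash-and-symlink passes above (minikubeCA.pem, 4290.pem, 42902.pem) follow the standard OpenSSL CA-directory layout: each PEM is placed under /usr/share/ca-certificates, its subject hash is computed with "openssl x509 -hash -noout", and a symlink named <hash>.0 is created in /etc/ssl/certs so TLS libraries can find the cert. A minimal Go sketch of that pattern (the helper name and paths are illustrative, not minikube's actual code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA computes the cert's OpenSSL subject hash and symlinks the
    // PEM into /etc/ssl/certs under <hash>.0, mirroring the ln -fs above.
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        // ln -fs semantics: drop any stale link, then recreate it.
        _ = os.Remove(link)
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
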
	I1212 01:35:18.148423  291455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:35:18.152541  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:35:18.195372  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:35:18.236073  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:35:18.276924  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:35:18.317697  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:35:18.358213  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
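
Each control-plane certificate is then vetted with "openssl x509 -checkend 86400", which exits non-zero if the cert expires within the next 24 hours. The same question can be answered natively with crypto/x509; a sketch of an equivalent check (not minikube's implementation):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file
    // expires within the given window, the same question that
    // `openssl x509 -checkend 86400` answers for a 24h window.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }
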
	I1212 01:35:18.400083  291455 kubeadm.go:401] StartCluster: {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:18.400177  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 01:35:18.400236  291455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:35:18.437669  291455 cri.go:89] found id: ""
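
The empty crictl result above tells minikube that no kube-system containers exist yet to reuse or tear down, which is what steers it toward the cluster-restart path next. A sketch of that query, assuming crictl is on PATH and configured for the node's containerd socket:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listKubeSystemContainers mirrors the crictl invocation in the log:
    // list all container IDs whose pod lives in the kube-system namespace.
    func listKubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := listKubeSystemContainers()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        // An empty list, as in the log, means there is nothing to tear
        // down and the on-disk configuration can be reused for a restart.
        fmt.Printf("found %d kube-system containers\n", len(ids))
    }
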
	I1212 01:35:18.437744  291455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:35:18.446134  291455 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 01:35:18.446156  291455 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 01:35:18.446208  291455 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:35:18.453928  291455 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:35:18.454522  291455 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-256959" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:35:18.454766  291455 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-2343/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-256959" cluster setting kubeconfig missing "newest-cni-256959" context setting]
	I1212 01:35:18.455226  291455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
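
Here the profile is missing from the kubeconfig, so minikube adds the cluster and context entries and rewrites the file under a write lock. A rough equivalent using client-go's clientcmd package (the entry name and server URL are taken from this run; the helper itself is hypothetical, and real credentials are omitted):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    // ensureContext adds the cluster/context entries for a profile when a
    // kubeconfig is missing them, then writes the file back.
    func ensureContext(kubeconfig, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(kubeconfig)
        if err != nil {
            return err
        }
        if _, ok := cfg.Clusters[name]; !ok {
            cfg.Clusters[name] = &api.Cluster{Server: server}
        }
        if _, ok := cfg.AuthInfos[name]; !ok {
            // Real client certs and keys would be filled in here.
            cfg.AuthInfos[name] = &api.AuthInfo{}
        }
        if _, ok := cfg.Contexts[name]; !ok {
            cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
        }
        cfg.CurrentContext = name
        return clientcmd.WriteToFile(*cfg, kubeconfig)
    }

    func main() {
        if err := ensureContext("/home/jenkins/.kube/config", "newest-cni-256959", "https://192.168.76.2:8443"); err != nil {
            fmt.Println(err)
        }
    }
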
	I1212 01:35:18.456674  291455 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:35:18.464597  291455 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1212 01:35:18.464630  291455 kubeadm.go:602] duration metric: took 18.46826ms to restartPrimaryControlPlane
	I1212 01:35:18.464640  291455 kubeadm.go:403] duration metric: took 64.568702ms to StartCluster
	I1212 01:35:18.464656  291455 settings.go:142] acquiring lock: {Name:mk6dd4250df69aeba4752e9f33aeef37272375c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:18.464716  291455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:35:18.465619  291455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:18.465827  291455 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 01:35:18.466211  291455 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:35:18.466236  291455 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:35:18.466355  291455 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-256959"
	I1212 01:35:18.466367  291455 addons.go:70] Setting dashboard=true in profile "newest-cni-256959"
	I1212 01:35:18.466371  291455 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-256959"
	I1212 01:35:18.466378  291455 addons.go:239] Setting addon dashboard=true in "newest-cni-256959"
	W1212 01:35:18.466385  291455 addons.go:248] addon dashboard should already be in state true
	I1212 01:35:18.466396  291455 host.go:66] Checking if "newest-cni-256959" exists ...
	I1212 01:35:18.466403  291455 host.go:66] Checking if "newest-cni-256959" exists ...
	I1212 01:35:18.466836  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:18.466869  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:18.467337  291455 addons.go:70] Setting default-storageclass=true in profile "newest-cni-256959"
	I1212 01:35:18.467363  291455 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-256959"
	I1212 01:35:18.467641  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:18.469758  291455 out.go:179] * Verifying Kubernetes components...
	I1212 01:35:18.473053  291455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:18.505578  291455 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:35:18.507992  291455 addons.go:239] Setting addon default-storageclass=true in "newest-cni-256959"
	I1212 01:35:18.508032  291455 host.go:66] Checking if "newest-cni-256959" exists ...
	I1212 01:35:18.508443  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:18.515343  291455 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:35:18.515364  291455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:35:18.515428  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:18.518345  291455 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 01:35:18.523100  291455 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1212 01:35:17.036393  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:19.036650  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:18.525972  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 01:35:18.526002  291455 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 01:35:18.526079  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:18.564602  291455 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:18.564630  291455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:35:18.564700  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:18.565404  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:18.592490  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:18.614974  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
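
All three addon installers above talk to the node over SSH on the docker-forwarded port 33103, authenticating as user "docker" with the machine's id_rsa key; the earlier "scp memory" lines stream in-memory manifests over the same channel. A self-contained sketch with golang.org/x/crypto/ssh (the tee-based copy is an assumption about the mechanism, not minikube's ssh_runner code, and the key path is shortened):

    package main

    import (
        "bytes"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // dialNode connects to the node's SSH port the way the sshutil lines
    // above describe: user "docker", key auth with the machine's id_rsa.
    func dialNode(addr, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User: "docker",
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Throwaway test VM; a real client would pin the host key.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        return ssh.Dial("tcp", addr, cfg)
    }

    // copyMemory streams an in-memory buffer to a path on the node, one
    // plausible mechanism behind the "scp memory --> ..." lines.
    func copyMemory(client *ssh.Client, data []byte, dst string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run("sudo tee " + dst + " > /dev/null")
    }

    func main() {
        client, err := dialNode("127.0.0.1:33103", "/path/to/machines/newest-cni-256959/id_rsa")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer client.Close()
        manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n")
        fmt.Println(copyMemory(client, manifest, "/tmp/demo-ns.yaml"))
    }
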
	I1212 01:35:18.707284  291455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:35:18.738514  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:35:18.783779  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 01:35:18.783804  291455 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 01:35:18.797813  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:18.817201  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 01:35:18.817275  291455 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 01:35:18.834247  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 01:35:18.834268  291455 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 01:35:18.850261  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 01:35:18.850281  291455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 01:35:18.864878  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 01:35:18.864902  291455 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 01:35:18.879989  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 01:35:18.880012  291455 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 01:35:18.893252  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 01:35:18.893275  291455 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 01:35:18.906457  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 01:35:18.906522  291455 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 01:35:18.919410  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:18.919484  291455 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 01:35:18.931957  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:19.295481  291455 api_server.go:52] waiting for apiserver process to appear ...
	W1212 01:35:19.295638  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.295690  291455 retry.go:31] will retry after 249.842732ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
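
Because the apiserver is not listening yet, every kubectl apply fails with connection refused and is rescheduled by minikube's retry helper with a short randomized delay, as the "will retry after 249.842732ms" line shows. The pattern, sketched generically (the jitter policy is assumed, not copied from retry.go):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff runs fn and, on failure, waits a jittered delay
    // before trying again, up to maxAttempts. This mirrors the shape of
    // the "will retry after Xms" lines; the exact bounds are illustrative.
    func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
        var err error
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        attempts := 0
        err := retryWithBackoff(5, 250*time.Millisecond, func() error {
            attempts++
            if attempts < 3 {
                return fmt.Errorf("connection refused")
            }
            return nil
        })
        fmt.Println("result:", err, "after", attempts, "attempts")
    }
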
	W1212 01:35:19.295768  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.295783  291455 retry.go:31] will retry after 351.420897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:35:19.296118  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.296142  291455 retry.go:31] will retry after 281.426587ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.296213  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
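
In parallel with the addon retries, minikube polls for the kube-apiserver process itself with pgrep; once that process appears, the applies can start succeeding. A sketch of such a wait loop, reusing the pgrep pattern from the log:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep, as the log does, until a
    // kube-apiserver process appears or the deadline passes.
    func waitForAPIServerProcess(ctx context.Context) error {
        tick := time.NewTicker(500 * time.Millisecond)
        defer tick.Stop()
        for {
            // pgrep exits 0 when at least one process matches.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
            case <-tick.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        fmt.Println(waitForAPIServerProcess(ctx))
    }
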
	I1212 01:35:19.546048  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:35:19.578494  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:19.622946  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.623064  291455 retry.go:31] will retry after 277.166543ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.648375  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:19.656309  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.656406  291455 retry.go:31] will retry after 462.607475ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:35:19.715463  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.715506  291455 retry.go:31] will retry after 556.232924ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.796674  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:19.900383  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:19.963236  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.963266  291455 retry.go:31] will retry after 505.253944ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.119589  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:20.186519  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.186613  291455 retry.go:31] will retry after 424.835438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.272893  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:20.296648  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:20.336051  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.336183  291455 retry.go:31] will retry after 483.909657ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.469348  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:20.528062  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.528096  291455 retry.go:31] will retry after 804.643976ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.612336  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:20.682501  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.682548  291455 retry.go:31] will retry after 558.97301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.795783  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:20.820454  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:20.905698  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.905732  291455 retry.go:31] will retry after 695.755311ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.242222  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:21.295663  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:21.312788  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.312824  291455 retry.go:31] will retry after 1.866088371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.333223  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:21.536481  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:23.536603  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:21.395495  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.395527  291455 retry.go:31] will retry after 1.442265452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.601699  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:21.661918  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.661958  291455 retry.go:31] will retry after 965.923553ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.796193  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:22.296596  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:22.628164  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:22.689983  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:22.690024  291455 retry.go:31] will retry after 2.419076287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:22.796215  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:22.838490  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:22.896567  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:22.896595  291455 retry.go:31] will retry after 1.026441386s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:23.180088  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:23.242606  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:23.242641  291455 retry.go:31] will retry after 1.447175367s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:23.295985  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:23.795677  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:23.924269  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:23.999262  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:23.999301  291455 retry.go:31] will retry after 3.676300513s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:24.295744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:24.690891  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:24.751142  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:24.751178  291455 retry.go:31] will retry after 2.523379824s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:24.796474  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:25.109290  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:25.170081  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:25.170117  291455 retry.go:31] will retry after 1.61445699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:25.296317  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:25.796411  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:26.295885  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:26.036848  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:28.536033  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:26.784844  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:26.796101  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:26.910864  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:26.910893  291455 retry.go:31] will retry after 5.25056634s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.275356  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:27.295815  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:27.348749  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.348785  291455 retry.go:31] will retry after 4.97523733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.676221  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:27.738144  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.738177  291455 retry.go:31] will retry after 5.096436926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.796329  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:28.296194  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:28.795721  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:29.296646  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:29.795689  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:30.295694  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:30.796607  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:31.296202  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
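[editor's note] The half-second cadence of the `sudo pgrep -xnf kube-apiserver.*minikube.*` probes above is minikube polling for the kube-apiserver process to appear, since every addon apply fails until it does. A minimal sketch of that polling pattern, using k8s.io/apimachinery's wait helpers; the probe command is taken verbatim from the log, while the interval, timeout, and function names are illustrative assumptions, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForAPIServer polls until a kube-apiserver process for the minikube
// profile is running, mirroring the repeated pgrep probes in the log.
func waitForAPIServer(timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		// pgrep exits 0 when a matching process exists, non-zero otherwise.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		return err == nil, nil // non-zero exit: keep polling
	})
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println("kube-apiserver never came up:", err)
	}
}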
	W1212 01:35:30.536109  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:32.536508  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:35.036562  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:31.795914  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:32.161653  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:32.223763  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.223796  291455 retry.go:31] will retry after 3.268815276s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.296204  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:32.325119  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:32.386121  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.386153  291455 retry.go:31] will retry after 5.854435808s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.796226  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:32.834968  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:32.909984  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.910017  291455 retry.go:31] will retry after 7.163447884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:33.296541  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:33.796667  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:34.295628  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:34.796652  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:35.295756  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:35.493366  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:35.556021  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:35.556054  291455 retry.go:31] will retry after 12.955659755s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:35.796356  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:36.296236  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:37.036788  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:39.536591  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:36.796391  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:37.295746  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:37.795722  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:38.241525  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:38.295983  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:38.315189  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:38.315224  291455 retry.go:31] will retry after 8.402358708s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:38.795800  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:39.296313  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:39.795769  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:40.074570  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:40.142371  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:40.142407  291455 retry.go:31] will retry after 11.797804339s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:40.295684  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:40.795715  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:41.295800  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:42.035934  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:44.036480  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:41.796201  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:42.295677  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:42.795870  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:43.296206  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:43.795818  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:44.295727  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:44.795706  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:45.296501  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:45.795731  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:46.296084  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
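Between applies, the ssh_runner lines fire sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms, polling for an apiserver process that never appears. A local sketch of that poll loop follows (run directly rather than over minikube's SSH runner; the 10s deadline is an assumed value, not taken from the log):

// apiserver_poll.go - a sketch of the ~500ms liveness poll shown in the
// ssh_runner lines; pgrep runs locally here instead of over SSH.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	deadline := time.Now().Add(10 * time.Second) // assumed deadline for the sketch
	for now := range ticker.C {
		if now.After(deadline) {
			fmt.Println("timed out waiting for kube-apiserver")
			return
		}
		// -x: whole command line must match the pattern, -n: newest match,
		// -f: match against the full argument list. Exit status 1 = no match.
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			fmt.Printf("kube-apiserver running, pid %s", out)
			return
		}
	}
}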
	W1212 01:35:46.536110  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:48.536515  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:46.717860  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:46.778291  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:46.778324  291455 retry.go:31] will retry after 11.640937008s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:46.796419  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:47.296365  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:47.796242  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:48.295728  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:48.512617  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:48.620306  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:48.620334  291455 retry.go:31] will retry after 20.936993287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:48.795684  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:49.296228  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:49.796588  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:50.296351  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:50.796261  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:51.296609  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:50.536753  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:53.036546  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:51.796731  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:51.941351  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:52.001637  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:52.001682  291455 retry.go:31] will retry after 15.364088557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:52.296092  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:52.795636  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:53.296512  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:53.811922  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:54.295780  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:54.795777  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:55.296163  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:55.796273  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:56.295752  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:55.535981  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:57.536499  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:59.536582  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:56.795693  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:57.295887  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:57.796459  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:58.296209  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:58.419661  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:58.488403  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:58.488438  291455 retry.go:31] will retry after 29.791340434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:58.796698  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:59.295744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:59.796477  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:00.295794  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:00.795759  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:01.296237  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:36:02.036574  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:04.036717  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:01.796304  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:02.296424  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:02.795750  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:03.296298  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:03.796668  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:04.296158  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:04.796345  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:05.296665  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:05.796526  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:06.295717  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:36:06.536543  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:09.036693  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:06.795806  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:07.296383  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:07.366524  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:36:07.433303  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:07.433335  291455 retry.go:31] will retry after 21.959421138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:07.795756  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:08.296562  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:08.795685  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:09.295744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:09.558068  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:36:09.643748  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:09.643785  291455 retry.go:31] will retry after 31.140330108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:09.796018  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:10.295683  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:10.795744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:11.295780  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:36:11.536613  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:13.536774  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:11.795645  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:12.295717  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:12.795762  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:13.296234  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:13.795775  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:14.296543  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:14.796297  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:15.295763  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:15.795884  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:16.296551  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:36:16.036849  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:18.536512  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:16.796640  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:17.295760  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:17.796208  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:18.296641  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:18.795858  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:18.795946  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:18.819559  291455 cri.go:89] found id: ""
	I1212 01:36:18.819585  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.819594  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:18.819605  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:18.819671  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:18.843419  291455 cri.go:89] found id: ""
	I1212 01:36:18.843444  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.843453  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:18.843459  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:18.843524  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:18.867870  291455 cri.go:89] found id: ""
	I1212 01:36:18.867894  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.867903  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:18.867910  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:18.867975  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:18.892504  291455 cri.go:89] found id: ""
	I1212 01:36:18.892528  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.892536  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:18.892543  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:18.892614  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:18.916462  291455 cri.go:89] found id: ""
	I1212 01:36:18.916484  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.916493  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:18.916499  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:18.916555  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:18.940793  291455 cri.go:89] found id: ""
	I1212 01:36:18.940818  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.940827  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:18.940833  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:18.940892  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:18.965485  291455 cri.go:89] found id: ""
	I1212 01:36:18.965513  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.965521  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:18.965527  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:18.965585  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:18.990141  291455 cri.go:89] found id: ""
	I1212 01:36:18.990170  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.990179  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
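With the apiserver still unreachable, the cri.go lines enumerate each expected control-plane container through crictl ps -a --quiet --name=<component>, and every query returns an empty ID list, confirming no control-plane containers exist at all. A sketch of that enumeration (shelling out to crictl directly rather than through minikube's cri package; the component list is copied from the log above):

// cri_list.go - a sketch of the per-component container enumeration in the
// cri.go lines above, shelling out to crictl directly.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// --quiet prints only container IDs; -a includes exited containers.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}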
	I1212 01:36:18.990189  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:18.990202  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:19.044826  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:19.044860  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:19.058338  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:19.058373  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:19.121541  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:19.113010    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.113711    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.115490    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.116077    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.117640    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:19.113010    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.113711    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.115490    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.116077    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.117640    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
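Every "describe nodes" attempt in this run fails the same way: kubectl's first API-discovery request to localhost:8443 is refused because no kube-apiserver container exists (the crictl sweeps above find 0 containers). A minimal Go sketch, not minikube code, that reproduces just the connectivity check; the address comes from the log:

    // probe_apiserver.go - does anything listen on the apiserver port?
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // When nothing listens, this yields the same error kubectl logs:
        // "dial tcp [::1]:8443: connect: connection refused".
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on :8443")
    }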
	I1212 01:36:19.121602  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:19.121622  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:19.146904  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:19.146941  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:36:21.036609  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:23.536552  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:21.678937  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:21.689641  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:21.689710  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:21.722833  291455 cri.go:89] found id: ""
	I1212 01:36:21.722854  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.722862  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:21.722869  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:21.722926  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:21.747286  291455 cri.go:89] found id: ""
	I1212 01:36:21.747323  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.747339  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:21.747346  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:21.747417  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:21.771941  291455 cri.go:89] found id: ""
	I1212 01:36:21.771965  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.771980  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:21.771987  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:21.772052  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:21.801075  291455 cri.go:89] found id: ""
	I1212 01:36:21.801104  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.801113  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:21.801119  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:21.801176  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:21.825561  291455 cri.go:89] found id: ""
	I1212 01:36:21.825587  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.825595  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:21.825601  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:21.825659  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:21.854532  291455 cri.go:89] found id: ""
	I1212 01:36:21.854559  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.854569  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:21.854580  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:21.854640  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:21.879725  291455 cri.go:89] found id: ""
	I1212 01:36:21.879789  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.879814  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:21.879828  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:21.879912  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:21.904405  291455 cri.go:89] found id: ""
	I1212 01:36:21.904428  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.904437  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
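The sweep above repeats one check per control-plane component: `sudo crictl ps -a --quiet --name=<component>` prints one container ID per line, and an empty result becomes the "No container was found matching" warning. A hedged sketch of that loop (illustrative, not minikube's actual cri.go):

    // cri_check.go - list CRI containers per component, as the log does.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "kubernetes-dashboard"}
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out)) // --quiet: one ID per line
            if len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %d container(s)\n", name, len(ids))
        }
    }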
	I1212 01:36:21.904446  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:21.904487  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:21.970611  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:21.962223    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.962657    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964375    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964860    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.966282    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:21.962223    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.962657    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964375    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964860    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.966282    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:21.970642  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:21.970659  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:21.995425  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:21.995463  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:22.024736  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:22.024767  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:22.082740  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:22.082785  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:24.597828  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:24.608497  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:24.608573  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:24.633951  291455 cri.go:89] found id: ""
	I1212 01:36:24.633978  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.633986  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:24.633992  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:24.634048  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:24.658904  291455 cri.go:89] found id: ""
	I1212 01:36:24.658929  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.658937  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:24.658944  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:24.659026  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:24.683684  291455 cri.go:89] found id: ""
	I1212 01:36:24.683709  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.683718  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:24.683724  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:24.683791  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:24.708745  291455 cri.go:89] found id: ""
	I1212 01:36:24.708770  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.708779  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:24.708786  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:24.708842  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:24.733454  291455 cri.go:89] found id: ""
	I1212 01:36:24.733479  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.733488  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:24.733494  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:24.733551  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:24.761862  291455 cri.go:89] found id: ""
	I1212 01:36:24.761889  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.761898  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:24.761904  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:24.761961  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:24.785388  291455 cri.go:89] found id: ""
	I1212 01:36:24.785415  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.785424  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:24.785430  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:24.785486  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:24.810681  291455 cri.go:89] found id: ""
	I1212 01:36:24.810707  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.810717  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:24.810727  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:24.810743  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:24.865711  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:24.865752  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:24.880399  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:24.880431  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:24.943187  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:24.935391    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.936083    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937614    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937904    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.939457    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:24.935391    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.936083    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937614    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937904    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.939457    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:24.943253  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:24.943274  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:24.967790  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:24.967820  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:36:26.036483  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:28.036687  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:30.036781  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
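Interleaved with the 291455 lines, process 287206 (the no-preload-361053 profile) keeps polling that node's Ready condition and retrying on connection refused. A sketch of such a poll using client-go; the kubeconfig path is an assumption and the node name is copied from the log:

    // node_ready.go - poll a node's Ready condition, retrying on error.
    // Requires k8s.io/client-go and friends in go.mod.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := client.CoreV1().Nodes().Get(context.TODO(),
                "no-preload-361053", metav1.GetOptions{})
            if err != nil {
                // With the apiserver down this is exactly the logged
                // "dial tcp 192.168.85.2:8443: connect: connection refused".
                fmt.Println("error getting node (will retry):", err)
                time.Sleep(2 * time.Second)
                continue
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Println("node is Ready")
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
    }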
	I1212 01:36:27.495634  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:27.506605  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:27.506700  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:27.548836  291455 cri.go:89] found id: ""
	I1212 01:36:27.548864  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.548873  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:27.548879  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:27.548953  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:27.600295  291455 cri.go:89] found id: ""
	I1212 01:36:27.600324  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.600334  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:27.600340  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:27.600397  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:27.625951  291455 cri.go:89] found id: ""
	I1212 01:36:27.625979  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.625987  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:27.625993  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:27.626062  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:27.651635  291455 cri.go:89] found id: ""
	I1212 01:36:27.651660  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.651668  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:27.651675  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:27.651734  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:27.676415  291455 cri.go:89] found id: ""
	I1212 01:36:27.676437  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.676446  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:27.676473  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:27.676535  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:27.699845  291455 cri.go:89] found id: ""
	I1212 01:36:27.699868  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.699876  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:27.699883  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:27.699938  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:27.735327  291455 cri.go:89] found id: ""
	I1212 01:36:27.735353  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.735362  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:27.735368  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:27.735428  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:27.759909  291455 cri.go:89] found id: ""
	I1212 01:36:27.759932  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.759940  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:27.759950  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:27.759961  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:27.786638  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:27.786667  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:27.841026  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:27.841058  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:27.854475  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:27.854508  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:27.917832  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:27.909374    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.909866    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911432    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911952    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.913437    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:27.909374    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.909866    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911432    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911952    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.913437    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:27.917855  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:27.917867  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:28.286241  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:36:28.389245  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:28.389279  291455 retry.go:31] will retry after 46.053342505s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:29.393036  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:36:29.455460  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:29.455496  291455 retry.go:31] will retry after 47.570792587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
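Both failed applies land in the same retry loop (retry.go:31), which waits a randomized interval in the tens of seconds before trying again; hence "will retry after 46.053342505s" and "47.570792587s". A sketch of that pattern, with the jitter policy assumed for illustration rather than taken from minikube:

    // retry_apply.go - retry a kubectl apply with randomized backoff.
    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    func applyWithRetry(manifest string, attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            err = exec.Command("sudo", "kubectl", "apply", "--force",
                "-f", manifest).Run()
            if err == nil {
                return nil
            }
            // Assumed jitter: 30-60s, in the spirit of the logged delays.
            delay := 30*time.Second + time.Duration(rand.Int63n(int64(30*time.Second)))
            fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3); err != nil {
            fmt.Println("giving up:", err)
        }
    }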
	I1212 01:36:30.443136  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:30.453668  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:30.453743  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:30.480117  291455 cri.go:89] found id: ""
	I1212 01:36:30.480141  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.480149  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:30.480155  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:30.480214  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:30.505432  291455 cri.go:89] found id: ""
	I1212 01:36:30.505460  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.505470  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:30.505478  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:30.505543  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:30.530571  291455 cri.go:89] found id: ""
	I1212 01:36:30.530598  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.530608  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:30.530614  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:30.530675  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:30.587393  291455 cri.go:89] found id: ""
	I1212 01:36:30.587429  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.587439  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:30.587445  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:30.587517  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:30.631827  291455 cri.go:89] found id: ""
	I1212 01:36:30.631894  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.631917  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:30.631941  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:30.632019  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:30.655968  291455 cri.go:89] found id: ""
	I1212 01:36:30.656043  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.656065  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:30.656077  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:30.656143  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:30.680079  291455 cri.go:89] found id: ""
	I1212 01:36:30.680101  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.680110  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:30.680116  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:30.680175  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:30.704249  291455 cri.go:89] found id: ""
	I1212 01:36:30.704324  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.704346  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:30.704365  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:30.704391  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:30.760587  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:30.760620  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:30.774118  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:30.774145  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:30.838730  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:30.831029    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.831642    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.833120    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.833546    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.835035    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:30.831029    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.831642    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.833120    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.833546    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.835035    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:30.838753  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:30.838765  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:30.863650  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:30.863684  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:36:32.039431  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:34.536636  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:33.391024  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:33.401417  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:33.401486  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:33.425243  291455 cri.go:89] found id: ""
	I1212 01:36:33.425265  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.425274  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:33.425280  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:33.425337  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:33.451769  291455 cri.go:89] found id: ""
	I1212 01:36:33.451792  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.451800  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:33.451806  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:33.451869  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:33.476935  291455 cri.go:89] found id: ""
	I1212 01:36:33.476960  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.476968  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:33.476974  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:33.477035  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:33.502755  291455 cri.go:89] found id: ""
	I1212 01:36:33.502781  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.502796  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:33.502802  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:33.502859  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:33.528810  291455 cri.go:89] found id: ""
	I1212 01:36:33.528835  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.528844  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:33.528851  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:33.528915  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:33.559119  291455 cri.go:89] found id: ""
	I1212 01:36:33.559197  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.559219  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:33.559237  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:33.559321  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:33.624518  291455 cri.go:89] found id: ""
	I1212 01:36:33.624547  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.624556  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:33.624562  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:33.624620  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:33.657379  291455 cri.go:89] found id: ""
	I1212 01:36:33.657401  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.657409  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:33.657418  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:33.657428  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:33.713396  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:33.713430  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:33.727420  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:33.727450  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:33.796759  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:33.788822    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.789567    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.791169    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.791683    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.792822    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:33.788822    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.789567    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.791169    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.791683    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.792822    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:33.796782  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:33.796795  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:33.822210  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:33.822246  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:36:37.036646  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:39.036700  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:36.350581  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:36.361065  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:36.361139  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:36.384625  291455 cri.go:89] found id: ""
	I1212 01:36:36.384647  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.384655  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:36.384661  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:36.384721  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:36.409313  291455 cri.go:89] found id: ""
	I1212 01:36:36.409338  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.409347  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:36.409353  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:36.409414  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:36.437773  291455 cri.go:89] found id: ""
	I1212 01:36:36.437796  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.437804  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:36.437811  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:36.437875  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:36.462058  291455 cri.go:89] found id: ""
	I1212 01:36:36.462080  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.462089  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:36.462096  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:36.462158  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:36.485881  291455 cri.go:89] found id: ""
	I1212 01:36:36.485902  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.485911  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:36.485917  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:36.485973  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:36.510249  291455 cri.go:89] found id: ""
	I1212 01:36:36.510318  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.510340  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:36.510362  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:36.510444  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:36.546913  291455 cri.go:89] found id: ""
	I1212 01:36:36.546948  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.546957  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:36.546963  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:36.547067  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:36.604532  291455 cri.go:89] found id: ""
	I1212 01:36:36.604562  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.604571  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:36.604580  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:36.604593  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:36.684036  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:36.674581    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.675420    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.677203    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.677878    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.679666    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:36.674581    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.675420    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.677203    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.677878    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.679666    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:36.684061  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:36.684074  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:36.709835  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:36.709866  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:36.737742  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:36.737768  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:36.792829  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:36.792864  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
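	(Editor's note: the four "Gathering logs for ..." probes above are shell commands run on the node over SSH. The sketch below replays the same pass locally; the command strings are copied verbatim from the log, while the surrounding Go driver is a simplified stand-in for minikube's ssh_runner, not its actual implementation.)

```go
// Replay minikube's log-gathering pass: run each diagnostic command
// through bash and print whatever comes back. Failures are reported
// and skipped, mirroring how the log above warns and moves on.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("gathering %s failed: %v\n", name, err)
	}
	fmt.Printf("=== %s ===\n%s", name, out)
}

func main() {
	gather("containerd", "sudo journalctl -u containerd -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
}
```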
	I1212 01:36:39.307416  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:39.317852  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:39.317952  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:39.342723  291455 cri.go:89] found id: ""
	I1212 01:36:39.342747  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.342756  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:39.342763  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:39.342821  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:39.367433  291455 cri.go:89] found id: ""
	I1212 01:36:39.367472  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.367485  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:39.367492  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:39.367559  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:39.392871  291455 cri.go:89] found id: ""
	I1212 01:36:39.392896  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.392904  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:39.392911  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:39.392974  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:39.417519  291455 cri.go:89] found id: ""
	I1212 01:36:39.417546  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.417555  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:39.417562  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:39.417621  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:39.441729  291455 cri.go:89] found id: ""
	I1212 01:36:39.441760  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.441769  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:39.441775  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:39.441841  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:39.466118  291455 cri.go:89] found id: ""
	I1212 01:36:39.466147  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.466156  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:39.466163  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:39.466225  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:39.491269  291455 cri.go:89] found id: ""
	I1212 01:36:39.491292  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.491304  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:39.491310  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:39.491375  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:39.515625  291455 cri.go:89] found id: ""
	I1212 01:36:39.515650  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.515659  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:39.515668  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:39.515679  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:39.595337  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:39.595376  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:39.617464  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:39.617500  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:39.698043  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:39.689431    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.689924    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.691689    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.692010    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.693641    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:39.689431    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.689924    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.691689    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.692010    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.693641    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:39.698068  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:39.698080  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:39.722656  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:39.722692  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
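	(Editor's note: each cycle above checks one control-plane component at a time with `sudo crictl ps -a --quiet --name=<component>` and logs "No container was found matching ..." when the ID list is empty. The following sketch condenses that sweep into one loop; the component names and the crictl invocation are taken from the log, the loop itself is illustrative, run locally rather than over SSH.)

```go
// Sweep the node for control-plane containers by name, as the log does.
// An empty crictl result means the component never started, which is why
// every probe in this log reports `found id: ""` / 0 containers.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Errors are deliberately ignored here; an unreachable crictl
		// simply yields no IDs, same as a missing container.
		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}
```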
	I1212 01:36:40.784380  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:36:40.845895  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:36:40.846018  291455 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
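	(Editor's note: the addons.go warning above says "apply failed, will retry". A hedged sketch of that retry pattern follows; the command line, binary path, and manifest path are copied from the log, but the retry policy shown (5 attempts, 2s apart) is an assumption for illustration, not minikube's actual backoff.)

```go
// Retry `kubectl apply` for an addon manifest until the apiserver answers.
// While the apiserver is down, each attempt fails exactly as in the log:
// kubectl cannot download the OpenAPI schema to validate the manifest.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml",
	}
	for attempt := 1; attempt <= 5; attempt++ { // assumed policy, see note above
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("giving up; apiserver still unreachable")
}
```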
	W1212 01:36:41.536608  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:44.036506  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
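	(Editor's note: the node_ready.go warnings interleaved here come from the parallel no-preload test (pid 287206) polling its node's Ready condition and retrying on connection refused. The sketch below reproduces that poll as a bare HTTP probe; the URL is copied from the log, while the insecure TLS config and 2s interval are assumptions for a standalone example, since minikube itself uses an authenticated client-go clientset.)

```go
// Poll the node object until the apiserver answers, retrying on dial
// errors just as the node_ready.go warnings above do.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Skip cert verification only because this probe has no CA bundle;
		// a real client would use the cluster's credentials.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   3 * time.Second,
	}
	url := "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053"
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("error getting node (will retry):", err)
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver answered:", resp.Status)
		return
	}
}
```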
	I1212 01:36:42.256252  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:42.269504  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:42.269576  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:42.296285  291455 cri.go:89] found id: ""
	I1212 01:36:42.296314  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.296323  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:42.296330  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:42.296393  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:42.324314  291455 cri.go:89] found id: ""
	I1212 01:36:42.324349  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.324366  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:42.324373  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:42.324448  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:42.353000  291455 cri.go:89] found id: ""
	I1212 01:36:42.353024  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.353033  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:42.353039  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:42.353103  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:42.379029  291455 cri.go:89] found id: ""
	I1212 01:36:42.379057  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.379066  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:42.379073  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:42.379141  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:42.404039  291455 cri.go:89] found id: ""
	I1212 01:36:42.404068  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.404077  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:42.404084  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:42.404150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:42.429848  291455 cri.go:89] found id: ""
	I1212 01:36:42.429877  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.429887  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:42.429893  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:42.429952  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:42.454022  291455 cri.go:89] found id: ""
	I1212 01:36:42.454049  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.454058  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:42.454065  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:42.454126  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:42.481205  291455 cri.go:89] found id: ""
	I1212 01:36:42.481231  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.481240  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:42.481249  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:42.481260  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:42.511373  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:42.511400  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:42.594053  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:42.594092  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:42.613172  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:42.613201  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:42.688118  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:42.678899    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.679678    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.681197    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.681708    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.683477    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:42.678899    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.679678    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.681197    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.681708    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.683477    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:42.688142  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:42.688155  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:45.213644  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:45.234582  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:45.234677  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:45.268686  291455 cri.go:89] found id: ""
	I1212 01:36:45.268715  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.268732  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:45.268741  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:45.268827  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:45.297061  291455 cri.go:89] found id: ""
	I1212 01:36:45.297115  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.297132  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:45.297139  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:45.297272  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:45.324030  291455 cri.go:89] found id: ""
	I1212 01:36:45.324063  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.324072  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:45.324078  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:45.324144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:45.354569  291455 cri.go:89] found id: ""
	I1212 01:36:45.354595  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.354612  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:45.354619  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:45.354697  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:45.380068  291455 cri.go:89] found id: ""
	I1212 01:36:45.380133  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.380160  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:45.380175  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:45.380249  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:45.403554  291455 cri.go:89] found id: ""
	I1212 01:36:45.403620  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.403643  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:45.403664  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:45.403746  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:45.426534  291455 cri.go:89] found id: ""
	I1212 01:36:45.426560  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.426568  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:45.426574  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:45.426637  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:45.455346  291455 cri.go:89] found id: ""
	I1212 01:36:45.455414  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.455438  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:45.455457  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:45.455469  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:45.510486  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:45.510521  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:45.523916  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:45.523944  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:45.642152  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:45.624680    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.625385    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.635164    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.635878    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.637755    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:45.624680    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.625385    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.635164    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.635878    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.637755    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:45.642173  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:45.642186  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:45.667625  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:45.667661  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:36:46.535816  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:48.537737  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:48.197188  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:48.208199  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:48.208272  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:48.236943  291455 cri.go:89] found id: ""
	I1212 01:36:48.236969  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.236977  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:48.236984  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:48.237048  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:48.262444  291455 cri.go:89] found id: ""
	I1212 01:36:48.262468  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.262477  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:48.262483  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:48.262545  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:48.292262  291455 cri.go:89] found id: ""
	I1212 01:36:48.292292  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.292301  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:48.292307  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:48.292370  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:48.318028  291455 cri.go:89] found id: ""
	I1212 01:36:48.318053  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.318063  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:48.318069  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:48.318128  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:48.343500  291455 cri.go:89] found id: ""
	I1212 01:36:48.343524  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.343532  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:48.343539  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:48.343620  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:48.374537  291455 cri.go:89] found id: ""
	I1212 01:36:48.374563  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.374572  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:48.374578  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:48.374657  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:48.399165  291455 cri.go:89] found id: ""
	I1212 01:36:48.399188  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.399197  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:48.399203  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:48.399265  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:48.424429  291455 cri.go:89] found id: ""
	I1212 01:36:48.424452  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.424460  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:48.424469  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:48.424482  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:48.450297  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:48.450336  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:48.477992  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:48.478017  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:48.533513  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:48.533546  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:48.554972  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:48.555078  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:48.639199  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:48.628523    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.629323    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.630881    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.631460    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.634979    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:48.628523    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.629323    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.630881    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.631460    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.634979    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:51.139443  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:51.152801  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:51.152869  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:51.181036  291455 cri.go:89] found id: ""
	I1212 01:36:51.181060  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.181069  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:51.181076  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:51.181139  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:51.205637  291455 cri.go:89] found id: ""
	I1212 01:36:51.205664  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.205673  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:51.205680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:51.205744  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:51.230375  291455 cri.go:89] found id: ""
	I1212 01:36:51.230401  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.230410  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:51.230416  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:51.230479  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:51.260594  291455 cri.go:89] found id: ""
	I1212 01:36:51.260620  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.260629  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:51.260636  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:51.260693  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:51.286513  291455 cri.go:89] found id: ""
	I1212 01:36:51.286538  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.286548  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:51.286554  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:51.286613  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:51.320488  291455 cri.go:89] found id: ""
	I1212 01:36:51.320511  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.320519  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:51.320526  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:51.320593  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1212 01:36:51.035818  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:53.036491  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:55.036601  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:51.346751  291455 cri.go:89] found id: ""
	I1212 01:36:51.346773  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.346782  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:51.346788  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:51.346848  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:51.372774  291455 cri.go:89] found id: ""
	I1212 01:36:51.372797  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.372805  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:51.372820  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:51.372832  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:51.397287  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:51.397322  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:51.424395  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:51.424423  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:51.484364  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:51.484400  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:51.497751  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:51.497778  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:51.609432  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:51.593650    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.595213    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.596974    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.601995    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.602562    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:51.593650    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.595213    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.596974    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.601995    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.602562    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:54.111055  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:54.123333  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:54.123404  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:54.147152  291455 cri.go:89] found id: ""
	I1212 01:36:54.147218  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.147246  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:54.147268  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:54.147370  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:54.172120  291455 cri.go:89] found id: ""
	I1212 01:36:54.172186  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.172212  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:54.172233  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:54.172318  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:54.199177  291455 cri.go:89] found id: ""
	I1212 01:36:54.199242  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.199262  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:54.199269  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:54.199346  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:54.223691  291455 cri.go:89] found id: ""
	I1212 01:36:54.223716  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.223724  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:54.223731  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:54.223796  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:54.248969  291455 cri.go:89] found id: ""
	I1212 01:36:54.248991  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.249000  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:54.249007  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:54.249076  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:54.274124  291455 cri.go:89] found id: ""
	I1212 01:36:54.274149  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.274158  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:54.274165  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:54.274223  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:54.299049  291455 cri.go:89] found id: ""
	I1212 01:36:54.299071  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.299079  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:54.299085  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:54.299142  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:54.323692  291455 cri.go:89] found id: ""
	I1212 01:36:54.323727  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.323736  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:54.323745  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:54.323757  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:54.337075  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:54.337102  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:54.405905  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:54.396717    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.397409    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399032    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399536    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.401700    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:54.396717    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.397409    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399032    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399536    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.401700    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:54.405927  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:54.405938  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:54.432446  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:54.432489  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:54.461143  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:54.461170  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 01:36:57.536480  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:59.536672  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:57.017892  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:57.031680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:57.031754  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:57.058619  291455 cri.go:89] found id: ""
	I1212 01:36:57.058644  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.058661  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:57.058670  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:57.058744  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:57.082470  291455 cri.go:89] found id: ""
	I1212 01:36:57.082496  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.082505  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:57.082511  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:57.082569  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:57.107129  291455 cri.go:89] found id: ""
	I1212 01:36:57.107152  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.107161  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:57.107174  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:57.107235  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:57.131240  291455 cri.go:89] found id: ""
	I1212 01:36:57.131264  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.131272  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:57.131282  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:57.131339  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:57.161702  291455 cri.go:89] found id: ""
	I1212 01:36:57.161728  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.161737  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:57.161743  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:57.161800  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:57.186568  291455 cri.go:89] found id: ""
	I1212 01:36:57.186592  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.186601  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:57.186607  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:57.186724  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:57.211286  291455 cri.go:89] found id: ""
	I1212 01:36:57.211310  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.211319  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:57.211325  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:57.211382  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:57.236370  291455 cri.go:89] found id: ""
	I1212 01:36:57.236394  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.236403  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:57.236412  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:57.236423  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:57.292504  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:57.292539  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:57.306287  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:57.306314  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:57.369836  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:57.361540    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.362207    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.363914    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.364465    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.366079    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:57.361540    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.362207    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.363914    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.364465    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.366079    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:57.369856  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:57.369870  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:57.395588  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:57.395625  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
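The container-status probe is written to degrade gracefully: run crictl when it resolves, otherwise fall back to the Docker CLI. The same fallback, spelled out:

    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a     # all CRI containers, running or exited
    else
      sudo docker ps -a     # fall back to Docker when crictl is absent
    fi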
	I1212 01:36:59.923774  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:59.935843  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:59.935936  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:59.961362  291455 cri.go:89] found id: ""
	I1212 01:36:59.961383  291455 logs.go:282] 0 containers: []
	W1212 01:36:59.961392  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:59.961398  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:59.961453  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:59.987418  291455 cri.go:89] found id: ""
	I1212 01:36:59.987448  291455 logs.go:282] 0 containers: []
	W1212 01:36:59.987458  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:59.987463  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:59.987521  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:00.083321  291455 cri.go:89] found id: ""
	I1212 01:37:00.083352  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.083362  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:00.083369  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:00.083456  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:00.200170  291455 cri.go:89] found id: ""
	I1212 01:37:00.200535  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.200580  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:00.200686  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:00.201034  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:00.291145  291455 cri.go:89] found id: ""
	I1212 01:37:00.291235  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.291284  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:00.291318  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:00.291414  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:00.393558  291455 cri.go:89] found id: ""
	I1212 01:37:00.393606  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.393618  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:00.393626  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:00.393706  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:00.423985  291455 cri.go:89] found id: ""
	I1212 01:37:00.424023  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.424035  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:00.424041  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:00.424117  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:00.451670  291455 cri.go:89] found id: ""
	I1212 01:37:00.451695  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.451705  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
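An empty ID list for every control-plane name means containerd never created those containers at all; crash-looping containers would still appear under ps -a. A hedged cross-check directly against containerd's k8s.io namespace (default socket assumed):

    sudo ctr --namespace k8s.io containers list   # anything at the containerd layer?
    sudo crictl pods                              # any pod sandboxes at all?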
	I1212 01:37:00.451715  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:00.451728  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:00.509577  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:00.509614  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:00.525099  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:00.525133  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:00.635419  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:00.627409    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.628095    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.629751    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.630057    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.631588    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:00.635455  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:00.635468  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:00.663944  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:00.663984  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:37:02.037994  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:04.536623  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
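Process 287206 is the no-preload test polling the node's Ready condition and retrying on every refused connection. minikube does this through the Go client (node_ready.go); a purely illustrative bash equivalent of that wait loop:

    until kubectl get node no-preload-361053 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null \
        | grep -q True
    do
      echo "no-preload-361053 not Ready yet; retrying"; sleep 2
    done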
	I1212 01:37:03.194688  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:03.205352  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:03.205425  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:03.233099  291455 cri.go:89] found id: ""
	I1212 01:37:03.233131  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.233140  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:03.233146  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:03.233217  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:03.257676  291455 cri.go:89] found id: ""
	I1212 01:37:03.257700  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.257710  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:03.257716  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:03.257802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:03.282622  291455 cri.go:89] found id: ""
	I1212 01:37:03.282696  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.282719  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:03.282739  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:03.282834  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:03.309162  291455 cri.go:89] found id: ""
	I1212 01:37:03.309190  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.309199  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:03.309205  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:03.309265  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:03.334284  291455 cri.go:89] found id: ""
	I1212 01:37:03.334318  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.334327  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:03.334334  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:03.334401  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:03.361255  291455 cri.go:89] found id: ""
	I1212 01:37:03.361281  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.361290  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:03.361296  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:03.361376  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:03.386372  291455 cri.go:89] found id: ""
	I1212 01:37:03.386406  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.386415  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:03.386421  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:03.386490  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:03.412127  291455 cri.go:89] found id: ""
	I1212 01:37:03.412151  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.412160  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:03.412170  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:03.412181  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:03.467933  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:03.467980  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:03.481636  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:03.481663  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:03.565451  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:03.551611    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.552450    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.553999    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.554567    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.556109    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:03.565476  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:03.565548  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:03.614744  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:03.614783  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:06.159160  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:06.169841  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:06.169916  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:06.196496  291455 cri.go:89] found id: ""
	I1212 01:37:06.196521  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.196529  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:06.196536  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:06.196594  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:06.229404  291455 cri.go:89] found id: ""
	I1212 01:37:06.229429  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.229438  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:06.229444  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:06.229505  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:06.254056  291455 cri.go:89] found id: ""
	I1212 01:37:06.254081  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.254089  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:06.254095  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:06.254154  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:06.278424  291455 cri.go:89] found id: ""
	I1212 01:37:06.278453  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.278462  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:06.278469  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:06.278527  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:06.302517  291455 cri.go:89] found id: ""
	I1212 01:37:06.302545  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.302554  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:06.302560  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:06.302617  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:06.328634  291455 cri.go:89] found id: ""
	I1212 01:37:06.328657  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.328665  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:06.328671  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:06.328728  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1212 01:37:07.035836  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:09.035916  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:06.352026  291455 cri.go:89] found id: ""
	I1212 01:37:06.352099  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.352115  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:06.352125  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:06.352199  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:06.376075  291455 cri.go:89] found id: ""
	I1212 01:37:06.376101  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.376110  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:06.376119  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:06.376130  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:06.400451  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:06.400481  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:06.428356  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:06.428385  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:06.484230  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:06.484267  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:06.498047  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:06.498074  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:06.610705  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:06.593235    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.594305    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.599655    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.603092    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.603422    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:09.111534  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:09.121786  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:09.121855  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:09.148241  291455 cri.go:89] found id: ""
	I1212 01:37:09.148267  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.148275  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:09.148282  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:09.148341  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:09.172742  291455 cri.go:89] found id: ""
	I1212 01:37:09.172764  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.172773  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:09.172779  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:09.172835  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:09.197560  291455 cri.go:89] found id: ""
	I1212 01:37:09.197586  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.197595  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:09.197601  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:09.197673  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:09.222352  291455 cri.go:89] found id: ""
	I1212 01:37:09.222377  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.222386  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:09.222392  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:09.222450  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:09.246770  291455 cri.go:89] found id: ""
	I1212 01:37:09.246794  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.246802  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:09.246809  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:09.246875  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:09.273237  291455 cri.go:89] found id: ""
	I1212 01:37:09.273260  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.273268  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:09.273275  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:09.273342  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:09.298382  291455 cri.go:89] found id: ""
	I1212 01:37:09.298405  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.298414  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:09.298421  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:09.298479  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:09.326366  291455 cri.go:89] found id: ""
	I1212 01:37:09.326388  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.326396  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:09.326405  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:09.326416  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:09.339892  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:09.339920  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:09.408533  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:09.399583    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.400465    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.402243    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.402860    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.404361    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:09.408555  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:09.408568  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:09.434113  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:09.434149  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:09.469040  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:09.469065  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 01:37:11.036562  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:13.536873  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:12.025102  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:12.036649  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:12.036722  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:12.064882  291455 cri.go:89] found id: ""
	I1212 01:37:12.064905  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.064913  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:12.064919  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:12.064979  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:12.090328  291455 cri.go:89] found id: ""
	I1212 01:37:12.090354  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.090362  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:12.090369  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:12.090429  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:12.115640  291455 cri.go:89] found id: ""
	I1212 01:37:12.115665  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.115674  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:12.115680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:12.115741  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:12.140726  291455 cri.go:89] found id: ""
	I1212 01:37:12.140752  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.140773  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:12.140810  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:12.140900  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:12.165182  291455 cri.go:89] found id: ""
	I1212 01:37:12.165208  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.165216  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:12.165223  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:12.165282  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:12.189365  291455 cri.go:89] found id: ""
	I1212 01:37:12.189389  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.189398  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:12.189405  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:12.189463  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:12.214048  291455 cri.go:89] found id: ""
	I1212 01:37:12.214073  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.214082  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:12.214088  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:12.214148  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:12.240794  291455 cri.go:89] found id: ""
	I1212 01:37:12.240821  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.240830  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:12.240840  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:12.240851  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:12.300894  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:12.300936  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:12.314783  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:12.314817  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:12.382362  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:12.373621    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.374371    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.376069    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.376636    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.378249    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:12.382385  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:12.382397  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:12.408884  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:12.408921  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:14.444251  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:37:14.509220  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:37:14.509386  291455 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	]
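kubectl's own hint in the stderr above is to rerun with validation off, for example:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/dashboard-ns.yaml

Here that would not rescue the addon: validation fails only because the OpenAPI download hits the same refused connection on 8443, so with the apiserver down the apply request itself would fail next.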
	I1212 01:37:14.942929  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:14.953301  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:14.953373  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:14.977865  291455 cri.go:89] found id: ""
	I1212 01:37:14.977933  291455 logs.go:282] 0 containers: []
	W1212 01:37:14.977947  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:14.977954  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:14.978019  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:15.012296  291455 cri.go:89] found id: ""
	I1212 01:37:15.012325  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.012335  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:15.012342  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:15.012414  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:15.044602  291455 cri.go:89] found id: ""
	I1212 01:37:15.044629  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.044638  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:15.044644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:15.044705  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:15.072008  291455 cri.go:89] found id: ""
	I1212 01:37:15.072035  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.072043  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:15.072049  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:15.072112  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:15.098264  291455 cri.go:89] found id: ""
	I1212 01:37:15.098293  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.098308  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:15.098316  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:15.098390  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:15.124176  291455 cri.go:89] found id: ""
	I1212 01:37:15.124203  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.124212  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:15.124218  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:15.124278  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:15.148763  291455 cri.go:89] found id: ""
	I1212 01:37:15.148788  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.148797  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:15.148803  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:15.148880  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:15.173843  291455 cri.go:89] found id: ""
	I1212 01:37:15.173870  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.173879  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:15.173889  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:15.173901  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:15.203728  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:15.203757  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:15.259019  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:15.259053  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:15.272480  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:15.272509  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:15.337558  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:15.329071    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.329763    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.331497    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.332089    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.333695    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:15.337580  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:15.337592  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:17.027133  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:37:17.109229  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:37:17.109319  291455 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	]
	I1212 01:37:17.112386  291455 out.go:179] * Enabled addons: 
	W1212 01:37:16.035841  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:18.035966  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:20.036082  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:17.115266  291455 addons.go:530] duration metric: took 1m58.649036473s for enable addons: enabled=[]
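Every failure above reduces to one symptom: nothing is accepting connections on the apiserver port 8443, so both kubectl validation and the addon apply are refused. A minimal sketch for confirming that by hand from a shell on the node (the `ss` and `curl` probes are illustrative additions, not commands from this run; the final `kubectl apply` repeats the exact command the addon retry used above):

    # Is anything listening on the apiserver port?
    sudo ss -tlnp | grep 8443 || echo "apiserver not listening"

    # Probe the health endpoint directly (self-signed cert, hence -k).
    curl -sk https://localhost:8443/healthz; echo

    # Once /healthz answers, re-run the apply that failed above.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      -f /etc/kubernetes/addons/storage-provisioner.yaml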
	I1212 01:37:17.864277  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:17.875687  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:17.875762  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:17.900504  291455 cri.go:89] found id: ""
	I1212 01:37:17.900527  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.900536  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:17.900542  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:17.900626  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:17.925113  291455 cri.go:89] found id: ""
	I1212 01:37:17.925136  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.925145  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:17.925151  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:17.925238  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:17.950585  291455 cri.go:89] found id: ""
	I1212 01:37:17.950611  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.950620  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:17.950626  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:17.950687  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:17.977787  291455 cri.go:89] found id: ""
	I1212 01:37:17.977813  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.977822  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:17.977828  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:17.977888  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:18.006885  291455 cri.go:89] found id: ""
	I1212 01:37:18.006967  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.007019  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:18.007043  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:18.007118  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:18.033137  291455 cri.go:89] found id: ""
	I1212 01:37:18.033161  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.033170  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:18.033176  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:18.033238  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:18.058968  291455 cri.go:89] found id: ""
	I1212 01:37:18.059009  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.059019  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:18.059025  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:18.059087  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:18.084927  291455 cri.go:89] found id: ""
	I1212 01:37:18.084961  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.084971  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:18.084981  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:18.084994  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:18.153070  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:18.145061    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.145891    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.147207    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.147819    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.149000    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:18.153101  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:18.153113  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:18.178193  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:18.178227  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:18.205844  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:18.205874  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:18.261619  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:18.261657  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
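The block above is one full pass of minikube's diagnostic scan: it pgreps for a kube-apiserver process, then asks the CRI runtime for any container, running or exited, matching each control-plane name. A bash sketch of the same probe sequence, using the crictl invocation shown verbatim in the log:

    # One pass of the component scan performed above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<no container found>}"
    done

Every name printing "<no container found>" corresponds to an empty `found id: ""` result above, i.e. the control plane never came up under containerd.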
	I1212 01:37:20.775910  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:20.797119  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:20.797192  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:20.870519  291455 cri.go:89] found id: ""
	I1212 01:37:20.870556  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.870566  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:20.870573  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:20.870642  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:20.895021  291455 cri.go:89] found id: ""
	I1212 01:37:20.895044  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.895053  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:20.895059  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:20.895119  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:20.918242  291455 cri.go:89] found id: ""
	I1212 01:37:20.918270  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.918279  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:20.918286  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:20.918340  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:20.942755  291455 cri.go:89] found id: ""
	I1212 01:37:20.942781  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.942790  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:20.942796  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:20.942855  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:20.966487  291455 cri.go:89] found id: ""
	I1212 01:37:20.966551  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.966574  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:20.966595  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:20.966680  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:20.992848  291455 cri.go:89] found id: ""
	I1212 01:37:20.992922  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.992945  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:20.992959  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:20.993035  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:21.025558  291455 cri.go:89] found id: ""
	I1212 01:37:21.025587  291455 logs.go:282] 0 containers: []
	W1212 01:37:21.025596  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:21.025602  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:21.025663  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:21.050967  291455 cri.go:89] found id: ""
	I1212 01:37:21.051023  291455 logs.go:282] 0 containers: []
	W1212 01:37:21.051032  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:21.051041  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:21.051057  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:21.077368  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:21.077396  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:21.133503  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:21.133538  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:21.147218  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:21.147245  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:21.209763  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:21.201479    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.202138    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.203803    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.204409    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.205960    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:21.209786  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:21.209799  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
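When the scan finds no containers, minikube falls back to gathering four log sources. The commands below are copied (lightly simplified in the container-status case) from the `ssh_runner` lines above and can be run by hand on the node to inspect the same evidence:

    sudo journalctl -u kubelet -n 400       # kubelet: why pods are not starting
    sudo journalctl -u containerd -n 400    # runtime: image pulls, sandbox errors
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a || sudo docker ps -a  # container status, with docker fallback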
	W1212 01:37:22.036593  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:24.536591  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:23.737746  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:23.747983  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:23.748051  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:23.772289  291455 cri.go:89] found id: ""
	I1212 01:37:23.772315  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.772333  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:23.772341  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:23.772420  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:23.848280  291455 cri.go:89] found id: ""
	I1212 01:37:23.848306  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.848315  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:23.848322  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:23.848386  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:23.884675  291455 cri.go:89] found id: ""
	I1212 01:37:23.884700  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.884709  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:23.884715  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:23.884777  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:23.914530  291455 cri.go:89] found id: ""
	I1212 01:37:23.914553  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.914561  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:23.914569  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:23.914626  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:23.940203  291455 cri.go:89] found id: ""
	I1212 01:37:23.940275  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.940292  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:23.940299  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:23.940364  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:23.968920  291455 cri.go:89] found id: ""
	I1212 01:37:23.968944  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.968952  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:23.968959  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:23.969016  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:23.993883  291455 cri.go:89] found id: ""
	I1212 01:37:23.993910  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.993919  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:23.993925  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:23.993985  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:24.019876  291455 cri.go:89] found id: ""
	I1212 01:37:24.019901  291455 logs.go:282] 0 containers: []
	W1212 01:37:24.019909  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:24.019922  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:24.019935  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:24.052560  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:24.052586  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:24.107812  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:24.107847  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:24.121870  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:24.121902  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:24.193432  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:24.184434    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.184974    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.185943    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.187426    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.187845    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:24.193458  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:24.193471  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1212 01:37:26.536664  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:29.036444  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:26.720901  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:26.732114  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:26.732194  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:26.759421  291455 cri.go:89] found id: ""
	I1212 01:37:26.759443  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.759451  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:26.759458  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:26.759523  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:26.801227  291455 cri.go:89] found id: ""
	I1212 01:37:26.801252  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.801261  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:26.801290  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:26.801371  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:26.836143  291455 cri.go:89] found id: ""
	I1212 01:37:26.836168  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.836178  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:26.836184  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:26.836276  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:26.880334  291455 cri.go:89] found id: ""
	I1212 01:37:26.880373  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.880382  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:26.880388  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:26.880477  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:26.915704  291455 cri.go:89] found id: ""
	I1212 01:37:26.915769  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.915786  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:26.915793  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:26.915864  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:26.943219  291455 cri.go:89] found id: ""
	I1212 01:37:26.943252  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.943262  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:26.943269  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:26.943350  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:26.968790  291455 cri.go:89] found id: ""
	I1212 01:37:26.968867  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.968882  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:26.968889  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:26.968946  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:26.993867  291455 cri.go:89] found id: ""
	I1212 01:37:26.993892  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.993908  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:26.993918  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:26.993929  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:27.025483  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:27.025547  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:27.081672  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:27.081704  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:27.095698  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:27.095724  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:27.161161  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:27.151369    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.152034    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.153696    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.156078    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.157312    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:27.161189  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:27.161202  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:29.686768  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:29.699055  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:29.699131  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:29.725025  291455 cri.go:89] found id: ""
	I1212 01:37:29.725050  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.725059  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:29.725065  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:29.725140  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:29.749378  291455 cri.go:89] found id: ""
	I1212 01:37:29.749401  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.749410  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:29.749416  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:29.749481  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:29.773953  291455 cri.go:89] found id: ""
	I1212 01:37:29.773978  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.773987  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:29.773993  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:29.774052  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:29.831695  291455 cri.go:89] found id: ""
	I1212 01:37:29.831723  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.831732  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:29.831738  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:29.831794  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:29.881376  291455 cri.go:89] found id: ""
	I1212 01:37:29.881401  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.881412  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:29.881418  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:29.881477  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:29.905463  291455 cri.go:89] found id: ""
	I1212 01:37:29.905497  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.905506  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:29.905530  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:29.905618  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:29.929393  291455 cri.go:89] found id: ""
	I1212 01:37:29.929427  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.929436  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:29.929442  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:29.929507  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:29.956794  291455 cri.go:89] found id: ""
	I1212 01:37:29.956820  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.956829  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:29.956839  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:29.956850  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:29.981845  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:29.981878  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:30.037712  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:30.037751  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:30.096286  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:30.096320  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:30.111120  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:30.111160  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:30.180653  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:30.171653    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.172384    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.174167    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.174765    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.176527    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1212 01:37:31.535946  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:33.536464  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
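Interleaved with the scan, the no-preload test is polling the node's Ready condition over the same refused port (192.168.85.2:8443). A hedged equivalent of that check, assuming a kubeconfig pointed at the no-preload-361053 cluster (the jsonpath form is illustrative, not taken from this run):

    # What the node_ready poll above is waiting for: Ready=True.
    kubectl get node no-preload-361053 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

    # Raw reachability check of the same endpoint (TCP/TLS only; an
    # anonymous request would be rejected with 401/403, but "connection
    # refused" means the apiserver is not listening at all).
    curl -sk https://192.168.85.2:8443/api/v1/nodes/no-preload-361053 >/dev/null \
      && echo reachable || echo refused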
	I1212 01:37:32.681768  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:32.693283  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:32.693354  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:32.720606  291455 cri.go:89] found id: ""
	I1212 01:37:32.720629  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.720638  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:32.720644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:32.720703  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:32.747145  291455 cri.go:89] found id: ""
	I1212 01:37:32.747167  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.747177  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:32.747185  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:32.747243  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:32.772037  291455 cri.go:89] found id: ""
	I1212 01:37:32.772061  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.772070  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:32.772076  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:32.772134  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:32.862885  291455 cri.go:89] found id: ""
	I1212 01:37:32.862910  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.862919  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:32.862925  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:32.862983  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:32.888016  291455 cri.go:89] found id: ""
	I1212 01:37:32.888038  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.888049  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:32.888055  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:32.888115  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:32.912450  291455 cri.go:89] found id: ""
	I1212 01:37:32.912472  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.912481  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:32.912487  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:32.912544  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:32.935759  291455 cri.go:89] found id: ""
	I1212 01:37:32.935781  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.935790  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:32.935797  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:32.935855  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:32.963827  291455 cri.go:89] found id: ""
	I1212 01:37:32.963850  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.963858  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:32.963869  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:32.963880  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:32.988758  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:32.988788  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:33.021942  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:33.021973  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:33.078907  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:33.078940  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:33.094242  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:33.094270  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:33.157981  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:33.149433    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.150328    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.151907    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.152360    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.153844    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:35.659737  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:35.672022  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:35.672098  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:35.701308  291455 cri.go:89] found id: ""
	I1212 01:37:35.701334  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.701343  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:35.701349  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:35.701408  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:35.726385  291455 cri.go:89] found id: ""
	I1212 01:37:35.726409  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.726418  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:35.726424  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:35.726482  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:35.751557  291455 cri.go:89] found id: ""
	I1212 01:37:35.751593  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.751604  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:35.751610  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:35.751679  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:35.776892  291455 cri.go:89] found id: ""
	I1212 01:37:35.776956  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.776971  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:35.776982  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:35.777044  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:35.824076  291455 cri.go:89] found id: ""
	I1212 01:37:35.824107  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.824116  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:35.824122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:35.824179  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:35.880084  291455 cri.go:89] found id: ""
	I1212 01:37:35.880107  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.880115  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:35.880122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:35.880192  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:35.907066  291455 cri.go:89] found id: ""
	I1212 01:37:35.907091  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.907099  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:35.907105  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:35.907166  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:35.936636  291455 cri.go:89] found id: ""
	I1212 01:37:35.936713  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.936729  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:35.936739  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:35.936750  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:35.993085  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:35.993119  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:36.007767  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:36.007856  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:36.076959  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:36.068314    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.068888    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.070632    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.071390    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.072929    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:36.076984  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:36.076997  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:36.103429  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:36.103463  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:37:36.036277  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:38.536154  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:38.632890  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:38.643831  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:38.643909  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:38.671085  291455 cri.go:89] found id: ""
	I1212 01:37:38.671108  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.671116  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:38.671122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:38.671182  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:38.694933  291455 cri.go:89] found id: ""
	I1212 01:37:38.694958  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.694966  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:38.694972  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:38.695070  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:38.723033  291455 cri.go:89] found id: ""
	I1212 01:37:38.723060  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.723069  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:38.723075  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:38.723135  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:38.748068  291455 cri.go:89] found id: ""
	I1212 01:37:38.748093  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.748102  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:38.748109  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:38.748169  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:38.778336  291455 cri.go:89] found id: ""
	I1212 01:37:38.778362  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.778371  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:38.778377  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:38.778438  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:38.824425  291455 cri.go:89] found id: ""
	I1212 01:37:38.824452  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.824461  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:38.824468  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:38.824526  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:38.869581  291455 cri.go:89] found id: ""
	I1212 01:37:38.869607  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.869616  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:38.869623  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:38.869684  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:38.898375  291455 cri.go:89] found id: ""
	I1212 01:37:38.898401  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.898411  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:38.898420  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:38.898431  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:38.924559  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:38.924594  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:38.954848  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:38.954884  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:39.010528  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:39.010564  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:39.024383  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:39.024412  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:39.090716  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:39.082311    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.082890    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.084642    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.085084    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.086585    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:39.082311    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.082890    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.084642    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.085084    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.086585    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 01:37:40.536718  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:43.036535  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:45.036776  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:41.591539  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:41.602064  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:41.602135  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:41.626512  291455 cri.go:89] found id: ""
	I1212 01:37:41.626584  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.626609  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:41.626629  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:41.626713  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:41.651218  291455 cri.go:89] found id: ""
	I1212 01:37:41.651294  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.651317  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:41.651339  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:41.651429  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:41.676032  291455 cri.go:89] found id: ""
	I1212 01:37:41.676055  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.676064  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:41.676070  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:41.676144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:41.700472  291455 cri.go:89] found id: ""
	I1212 01:37:41.700495  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.700509  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:41.700516  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:41.700573  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:41.728292  291455 cri.go:89] found id: ""
	I1212 01:37:41.728317  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.728326  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:41.728332  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:41.728413  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:41.752458  291455 cri.go:89] found id: ""
	I1212 01:37:41.752496  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.752508  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:41.752515  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:41.752687  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:41.778677  291455 cri.go:89] found id: ""
	I1212 01:37:41.778703  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.778711  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:41.778717  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:41.778802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:41.831103  291455 cri.go:89] found id: ""
	I1212 01:37:41.831129  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.831138  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:41.831147  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:41.831158  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:41.922931  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:41.914201    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.914946    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.916560    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.917145    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.918787    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:41.914201    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.914946    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.916560    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.917145    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.918787    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:41.922954  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:41.922966  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:41.948574  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:41.948606  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:41.976883  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:41.976910  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:42.031740  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:42.031774  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
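Each retry cycle above starts by probing for control-plane containers with `sudo crictl ps -a --quiet --name=<component>` and treating empty output as "no container was found". A minimal sketch of that probe follows; the crictl command and component names are copied verbatim from the log, while the loop and output formatting are illustrative only, not minikube's cri.go.

    package main

    // Sketch of the per-component probe behind the repeated
    // "listing CRI containers ... found id: \"\"" lines above.
    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    		"kubernetes-dashboard",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a",
    			"--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("crictl failed for %s: %v\n", name, err)
    			continue
    		}
    		// --quiet prints one container ID per line; no output means
    		// no container exists for the component, which is what the
    		// report shows for every control-plane component here.
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", name)
    		} else {
    			fmt.Printf("%s: %v\n", name, ids)
    		}
    	}
    }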
	I1212 01:37:44.547156  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:44.557779  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:44.557852  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:44.585516  291455 cri.go:89] found id: ""
	I1212 01:37:44.585539  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.585547  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:44.585554  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:44.585614  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:44.610080  291455 cri.go:89] found id: ""
	I1212 01:37:44.610146  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.610170  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:44.610188  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:44.610282  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:44.634333  291455 cri.go:89] found id: ""
	I1212 01:37:44.634403  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.634428  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:44.634449  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:44.634538  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:44.659415  291455 cri.go:89] found id: ""
	I1212 01:37:44.659441  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.659450  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:44.659457  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:44.659518  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:44.688713  291455 cri.go:89] found id: ""
	I1212 01:37:44.688738  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.688747  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:44.688753  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:44.688813  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:44.713219  291455 cri.go:89] found id: ""
	I1212 01:37:44.713245  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.713262  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:44.713270  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:44.713334  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:44.736447  291455 cri.go:89] found id: ""
	I1212 01:37:44.736472  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.736480  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:44.736486  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:44.736562  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:44.762258  291455 cri.go:89] found id: ""
	I1212 01:37:44.762283  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.762292  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:44.762324  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:44.762341  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:44.839027  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:44.839065  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:44.856616  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:44.856643  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:44.936247  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:44.928242    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.928784    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.930267    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.930803    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.932347    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:44.928242    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.928784    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.930267    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.930803    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.932347    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:44.936278  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:44.936291  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:44.961626  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:44.961659  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:37:47.536481  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:49.536708  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:47.490976  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:47.501776  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:47.501852  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:47.532240  291455 cri.go:89] found id: ""
	I1212 01:37:47.532263  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.532271  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:47.532276  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:47.532336  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:47.556453  291455 cri.go:89] found id: ""
	I1212 01:37:47.556475  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.556484  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:47.556490  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:47.556551  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:47.580605  291455 cri.go:89] found id: ""
	I1212 01:37:47.580628  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.580637  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:47.580643  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:47.580709  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:47.605106  291455 cri.go:89] found id: ""
	I1212 01:37:47.605130  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.605139  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:47.605145  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:47.605224  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:47.630587  291455 cri.go:89] found id: ""
	I1212 01:37:47.630613  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.630622  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:47.630629  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:47.630733  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:47.656391  291455 cri.go:89] found id: ""
	I1212 01:37:47.656416  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.656424  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:47.656431  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:47.656489  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:47.680787  291455 cri.go:89] found id: ""
	I1212 01:37:47.680817  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.680826  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:47.680832  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:47.680913  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:47.706371  291455 cri.go:89] found id: ""
	I1212 01:37:47.706396  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.706405  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:47.706414  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:47.706458  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:47.763648  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:47.763687  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:47.777355  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:47.777383  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:47.899204  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:47.891161    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.891855    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.893228    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.893728    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.895403    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:47.891161    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.891855    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.893228    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.893728    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.895403    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:47.899226  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:47.899238  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:47.924220  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:47.924256  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
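When every probe comes back empty, the run falls back to the "Gathering logs for ..." pass over kubelet, dmesg, describe nodes, containerd, and container status. The shell commands in the sketch below are verbatim from the report; wrapping them in exec.Command this way is only an illustration of the pattern, not minikube's logs.go.

    package main

    // Sketch of the fallback log-gathering pass that runs once the
    // control-plane probes come back empty.
    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmds := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
    		{"containerd", "sudo journalctl -u containerd -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, c := range cmds {
    		fmt.Printf("Gathering logs for %s ...\n", c.name)
    		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
    		if err != nil {
    			// With no apiserver listening on :8443, "describe nodes"
    			// exits 1, producing the ** stderr ** blocks above.
    			fmt.Printf("  failed: %v\n", err)
    		}
    		fmt.Printf("  captured %d bytes\n", len(out))
    	}
    }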
	I1212 01:37:50.458301  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:50.468856  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:50.468926  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:50.493349  291455 cri.go:89] found id: ""
	I1212 01:37:50.493374  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.493382  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:50.493388  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:50.493445  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:50.517926  291455 cri.go:89] found id: ""
	I1212 01:37:50.517951  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.517960  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:50.517966  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:50.518026  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:50.546779  291455 cri.go:89] found id: ""
	I1212 01:37:50.546805  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.546814  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:50.546819  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:50.546877  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:50.572059  291455 cri.go:89] found id: ""
	I1212 01:37:50.572086  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.572102  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:50.572110  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:50.572173  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:50.596562  291455 cri.go:89] found id: ""
	I1212 01:37:50.596585  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.596594  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:50.596601  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:50.596669  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:50.621102  291455 cri.go:89] found id: ""
	I1212 01:37:50.621124  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.621132  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:50.621138  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:50.621196  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:50.645424  291455 cri.go:89] found id: ""
	I1212 01:37:50.645445  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.645454  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:50.645461  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:50.645521  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:50.670456  291455 cri.go:89] found id: ""
	I1212 01:37:50.670479  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.670487  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:50.670497  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:50.670508  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:50.726487  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:50.726519  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:50.740149  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:50.740178  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:50.846147  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:50.836239    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.837070    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.839024    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.839387    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.840598    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:50.836239    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.837070    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.839024    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.839387    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.840598    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:50.846174  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:50.846188  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:50.882509  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:50.882583  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:37:52.036566  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:54.036621  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:53.411213  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:53.421355  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:53.421422  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:53.444104  291455 cri.go:89] found id: ""
	I1212 01:37:53.444130  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.444139  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:53.444146  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:53.444205  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:53.467938  291455 cri.go:89] found id: ""
	I1212 01:37:53.467963  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.467972  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:53.467979  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:53.468038  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:53.492082  291455 cri.go:89] found id: ""
	I1212 01:37:53.492106  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.492115  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:53.492122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:53.492180  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:53.516011  291455 cri.go:89] found id: ""
	I1212 01:37:53.516040  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.516049  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:53.516056  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:53.516115  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:53.543513  291455 cri.go:89] found id: ""
	I1212 01:37:53.543550  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.543559  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:53.543565  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:53.543707  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:53.568681  291455 cri.go:89] found id: ""
	I1212 01:37:53.568705  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.568713  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:53.568720  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:53.568797  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:53.593562  291455 cri.go:89] found id: ""
	I1212 01:37:53.593587  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.593596  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:53.593602  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:53.593676  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:53.617634  291455 cri.go:89] found id: ""
	I1212 01:37:53.617658  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.617667  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:53.617677  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:53.617691  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:53.672956  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:53.672991  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:53.686739  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:53.686767  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:53.753435  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:53.745274    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.746109    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.747777    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.748302    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.749767    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:53.745274    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.746109    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.747777    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.748302    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.749767    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:53.753456  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:53.753470  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:53.785303  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:53.785347  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:37:56.536427  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:59.036479  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:56.343327  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:56.353619  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:56.353686  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:56.377008  291455 cri.go:89] found id: ""
	I1212 01:37:56.377032  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.377040  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:56.377047  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:56.377103  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:56.403572  291455 cri.go:89] found id: ""
	I1212 01:37:56.403599  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.403607  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:56.403614  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:56.403677  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:56.427234  291455 cri.go:89] found id: ""
	I1212 01:37:56.427256  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.427266  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:56.427272  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:56.427329  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:56.450300  291455 cri.go:89] found id: ""
	I1212 01:37:56.450325  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.450334  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:56.450340  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:56.450399  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:56.478269  291455 cri.go:89] found id: ""
	I1212 01:37:56.478293  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.478302  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:56.478308  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:56.478402  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:56.502839  291455 cri.go:89] found id: ""
	I1212 01:37:56.502863  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.502872  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:56.502879  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:56.502939  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:56.528770  291455 cri.go:89] found id: ""
	I1212 01:37:56.528796  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.528804  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:56.528810  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:56.528886  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:56.552625  291455 cri.go:89] found id: ""
	I1212 01:37:56.552687  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.552701  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:56.552710  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:56.552722  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:56.582901  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:56.582929  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:56.638758  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:56.638790  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:56.652337  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:56.652364  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:56.718815  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:56.710468    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.711245    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.712862    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.713372    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.714933    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:56.710468    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.711245    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.712862    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.713372    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.714933    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:56.718853  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:56.718866  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:59.245105  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:59.255232  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:59.255300  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:59.280996  291455 cri.go:89] found id: ""
	I1212 01:37:59.281018  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.281027  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:59.281033  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:59.281089  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:59.306870  291455 cri.go:89] found id: ""
	I1212 01:37:59.306893  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.306901  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:59.306908  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:59.306967  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:59.332982  291455 cri.go:89] found id: ""
	I1212 01:37:59.333008  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.333017  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:59.333022  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:59.333128  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:59.360799  291455 cri.go:89] found id: ""
	I1212 01:37:59.360824  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.360833  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:59.360839  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:59.360897  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:59.383773  291455 cri.go:89] found id: ""
	I1212 01:37:59.383836  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.383851  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:59.383858  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:59.383916  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:59.411933  291455 cri.go:89] found id: ""
	I1212 01:37:59.411958  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.411966  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:59.411973  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:59.412073  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:59.437061  291455 cri.go:89] found id: ""
	I1212 01:37:59.437087  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.437095  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:59.437102  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:59.437182  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:59.461853  291455 cri.go:89] found id: ""
	I1212 01:37:59.461877  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.461886  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
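The loop above is minikube probing the node for each expected control-plane container by name. A minimal sketch of the same probe run by hand, built only from the commands that appear verbatim in the log (the component list is an assumption read off the names queried above):

    #!/bin/bash
    # An empty result from `crictl ps -a --quiet --name=<component>` means no
    # container, running or exited, matches that name -- the condition the log
    # reports as 'No container was found matching "<component>"'.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "No container was found matching \"$c\""
    done

Every component comes back empty here, which is why the run falls back to gathering kubelet, dmesg, and containerd logs below.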
	I1212 01:37:59.461895  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:59.461907  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:59.493084  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:59.493111  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:59.549198  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:59.549229  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:59.562644  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:59.562674  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:59.627349  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:59.619195    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.619835    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.621508    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.622053    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.623671    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
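The describe-nodes step fails for the same underlying reason as the container probe: kubectl is pointed at an apiserver on localhost:8443 that is not running. A sketch of the check by hand, using the exact command and kubeconfig path from the log line above:

    # Run on the minikube node; while no kube-apiserver is listening on
    # localhost:8443 this exits 1 and prints the "connection refused"
    # discovery errors captured in the stderr block above.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    echo "exit: $?"

kubectl retries API group discovery before giving up, which is why the same memcache.go error is logged five times per attempt.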
	I1212 01:37:59.627373  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:59.627388  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1212 01:38:01.535866  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:03.536428  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:02.153040  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:02.163386  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:02.163465  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:02.188022  291455 cri.go:89] found id: ""
	I1212 01:38:02.188050  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.188058  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:02.188064  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:02.188126  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:02.212051  291455 cri.go:89] found id: ""
	I1212 01:38:02.212088  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.212097  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:02.212104  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:02.212163  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:02.236784  291455 cri.go:89] found id: ""
	I1212 01:38:02.236815  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.236824  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:02.236831  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:02.236895  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:02.262277  291455 cri.go:89] found id: ""
	I1212 01:38:02.262301  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.262310  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:02.262316  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:02.262375  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:02.286641  291455 cri.go:89] found id: ""
	I1212 01:38:02.286665  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.286674  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:02.286680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:02.286739  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:02.315696  291455 cri.go:89] found id: ""
	I1212 01:38:02.315721  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.315729  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:02.315736  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:02.315796  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:02.341469  291455 cri.go:89] found id: ""
	I1212 01:38:02.341495  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.341504  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:02.341511  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:02.341578  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:02.375601  291455 cri.go:89] found id: ""
	I1212 01:38:02.375626  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.375634  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:02.375644  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:02.375656  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:02.388949  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:02.388978  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:02.458902  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:02.448758    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.449311    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.452630    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.453261    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.454829    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:02.458924  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:02.458936  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:02.485359  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:02.485393  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:02.512676  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:02.512746  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:05.069728  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:05.084872  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:05.084975  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:05.130414  291455 cri.go:89] found id: ""
	I1212 01:38:05.130441  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.130450  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:05.130457  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:05.130524  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:05.156129  291455 cri.go:89] found id: ""
	I1212 01:38:05.156154  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.156163  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:05.156169  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:05.156230  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:05.182033  291455 cri.go:89] found id: ""
	I1212 01:38:05.182056  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.182065  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:05.182071  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:05.182131  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:05.206795  291455 cri.go:89] found id: ""
	I1212 01:38:05.206821  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.206830  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:05.206842  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:05.206903  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:05.231972  291455 cri.go:89] found id: ""
	I1212 01:38:05.231998  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.232008  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:05.232014  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:05.232075  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:05.257476  291455 cri.go:89] found id: ""
	I1212 01:38:05.257501  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.257509  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:05.257515  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:05.257576  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:05.282557  291455 cri.go:89] found id: ""
	I1212 01:38:05.282581  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.282590  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:05.282595  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:05.282655  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:05.306866  291455 cri.go:89] found id: ""
	I1212 01:38:05.306891  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.306899  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:05.306908  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:05.306919  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:05.363028  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:05.363073  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:05.376693  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:05.376722  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:05.445040  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:05.435873    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.436618    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.438470    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.439137    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.440737    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:05.445059  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:05.445071  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:05.470893  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:05.470933  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:05.536804  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:08.035822  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:10.036632  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
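Interleaved with the 291455 run, process 287206 (the no-preload test) is polling the node object over https://192.168.85.2:8443 and hitting the same refusal. A minimal equivalent probe (a sketch; the URL is copied from the warnings above, and -k stands in for the cluster CA a real client would verify against):

    # Expected to fail with curl exit code 7 (couldn't connect) while
    # nothing is listening on 192.168.85.2:8443.
    curl -ks https://192.168.85.2:8443/api/v1/nodes/no-preload-361053
    echo "exit: $?"

The warnings land roughly every two to two and a half seconds, matching a fixed retry interval that continues until the test's timeout.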
	I1212 01:38:08.000563  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:08.015628  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:08.015701  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:08.081620  291455 cri.go:89] found id: ""
	I1212 01:38:08.081643  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.081652  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:08.081661  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:08.081736  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:08.129116  291455 cri.go:89] found id: ""
	I1212 01:38:08.129137  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.129146  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:08.129152  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:08.129208  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:08.154760  291455 cri.go:89] found id: ""
	I1212 01:38:08.154781  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.154790  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:08.154797  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:08.154853  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:08.181948  291455 cri.go:89] found id: ""
	I1212 01:38:08.181971  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.181981  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:08.181988  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:08.182052  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:08.206310  291455 cri.go:89] found id: ""
	I1212 01:38:08.206335  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.206345  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:08.206351  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:08.206413  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:08.230579  291455 cri.go:89] found id: ""
	I1212 01:38:08.230606  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.230615  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:08.230624  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:08.230690  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:08.259888  291455 cri.go:89] found id: ""
	I1212 01:38:08.259913  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.259922  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:08.259928  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:08.260006  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:08.284903  291455 cri.go:89] found id: ""
	I1212 01:38:08.284927  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.284936  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:08.284945  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:08.284957  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:08.341529  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:08.341565  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:08.355353  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:08.355394  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:08.418766  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:08.409488    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.410375    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.412414    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.413281    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.414948    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:08.418789  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:08.418801  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:08.444616  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:08.444654  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:10.972656  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:10.983126  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:10.983206  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:11.011272  291455 cri.go:89] found id: ""
	I1212 01:38:11.011296  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.011305  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:11.011311  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:11.011372  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:11.061173  291455 cri.go:89] found id: ""
	I1212 01:38:11.061199  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.061208  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:11.061214  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:11.061273  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:11.124035  291455 cri.go:89] found id: ""
	I1212 01:38:11.124061  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.124070  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:11.124077  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:11.124144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:11.152861  291455 cri.go:89] found id: ""
	I1212 01:38:11.152900  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.152910  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:11.152932  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:11.153005  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:11.178248  291455 cri.go:89] found id: ""
	I1212 01:38:11.178270  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.178279  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:11.178285  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:11.178355  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:11.213235  291455 cri.go:89] found id: ""
	I1212 01:38:11.213260  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.213269  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:11.213275  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:11.213337  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:11.238933  291455 cri.go:89] found id: ""
	I1212 01:38:11.238960  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.238969  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:11.238975  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:11.239060  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:11.264115  291455 cri.go:89] found id: ""
	I1212 01:38:11.264137  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.264146  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:11.264155  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:11.264167  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:11.320523  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:11.320561  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:11.334027  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:11.334059  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:12.036672  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:14.536663  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:11.411780  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:11.403056    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.403575    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.405319    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.405839    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.407505    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:11.411803  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:11.411815  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:11.437459  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:11.437498  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:13.966371  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:13.976737  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:13.976807  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:14.002889  291455 cri.go:89] found id: ""
	I1212 01:38:14.002926  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.002936  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:14.002943  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:14.003051  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:14.028607  291455 cri.go:89] found id: ""
	I1212 01:38:14.028632  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.028640  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:14.028647  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:14.028707  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:14.068137  291455 cri.go:89] found id: ""
	I1212 01:38:14.068159  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.068168  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:14.068174  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:14.068236  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:14.114047  291455 cri.go:89] found id: ""
	I1212 01:38:14.114068  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.114077  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:14.114083  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:14.114142  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:14.143724  291455 cri.go:89] found id: ""
	I1212 01:38:14.143751  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.143760  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:14.143766  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:14.143837  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:14.172821  291455 cri.go:89] found id: ""
	I1212 01:38:14.172844  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.172853  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:14.172860  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:14.172922  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:14.201404  291455 cri.go:89] found id: ""
	I1212 01:38:14.201428  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.201437  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:14.201443  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:14.201502  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:14.225421  291455 cri.go:89] found id: ""
	I1212 01:38:14.225445  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.225454  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:14.225464  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:14.225475  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:14.281620  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:14.281655  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:14.295270  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:14.295297  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:14.361558  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:14.353174    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.353959    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.355541    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.356054    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.357617    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:14.361580  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:14.361594  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:14.387622  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:14.387657  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:17.036493  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:19.535924  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:16.917930  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:16.928677  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:16.928747  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:16.956782  291455 cri.go:89] found id: ""
	I1212 01:38:16.956805  291455 logs.go:282] 0 containers: []
	W1212 01:38:16.956815  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:16.956821  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:16.956882  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:16.982223  291455 cri.go:89] found id: ""
	I1212 01:38:16.982255  291455 logs.go:282] 0 containers: []
	W1212 01:38:16.982264  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:16.982270  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:16.982337  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:17.011072  291455 cri.go:89] found id: ""
	I1212 01:38:17.011097  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.011107  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:17.011114  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:17.011191  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:17.052070  291455 cri.go:89] found id: ""
	I1212 01:38:17.052096  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.052104  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:17.052110  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:17.052177  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:17.084107  291455 cri.go:89] found id: ""
	I1212 01:38:17.084141  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.084151  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:17.084157  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:17.084224  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:17.122692  291455 cri.go:89] found id: ""
	I1212 01:38:17.122766  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.122797  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:17.122817  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:17.122923  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:17.156006  291455 cri.go:89] found id: ""
	I1212 01:38:17.156081  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.156109  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:17.156129  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:17.156241  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:17.182169  291455 cri.go:89] found id: ""
	I1212 01:38:17.182240  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.182264  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:17.182285  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:17.182335  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:17.237895  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:17.237933  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:17.252584  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:17.252654  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:17.321480  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:17.312815    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.313531    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.315204    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.315765    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.317270    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:17.321502  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:17.321515  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:17.347596  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:17.347629  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:19.879967  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:19.890396  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:19.890464  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:19.918925  291455 cri.go:89] found id: ""
	I1212 01:38:19.918949  291455 logs.go:282] 0 containers: []
	W1212 01:38:19.918958  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:19.918964  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:19.919053  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:19.943584  291455 cri.go:89] found id: ""
	I1212 01:38:19.943610  291455 logs.go:282] 0 containers: []
	W1212 01:38:19.943619  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:19.943626  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:19.943681  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:19.969048  291455 cri.go:89] found id: ""
	I1212 01:38:19.969068  291455 logs.go:282] 0 containers: []
	W1212 01:38:19.969077  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:19.969083  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:19.969144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:20.003773  291455 cri.go:89] found id: ""
	I1212 01:38:20.003795  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.003804  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:20.003821  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:20.003894  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:20.066569  291455 cri.go:89] found id: ""
	I1212 01:38:20.066593  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.066602  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:20.066608  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:20.066672  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:20.123787  291455 cri.go:89] found id: ""
	I1212 01:38:20.123818  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.123828  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:20.123835  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:20.123902  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:20.148942  291455 cri.go:89] found id: ""
	I1212 01:38:20.148967  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.148976  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:20.148982  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:20.149040  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:20.174974  291455 cri.go:89] found id: ""
	I1212 01:38:20.175019  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.175028  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:20.175037  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:20.175049  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:20.188705  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:20.188734  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:20.257975  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:20.247998    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.248900    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.250615    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.251381    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.253188    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:20.258004  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:20.258018  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:20.283558  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:20.283589  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:20.313552  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:20.313580  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
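
	The cycle above repeatedly probes each control-plane component with "sudo crictl ps -a --quiet --name=<component>" and keeps finding zero containers. Below is a minimal Go sketch of that probe, runnable on the node itself (for example after "minikube ssh"); it assumes crictl is on PATH and the CRI endpoint is configured, neither of which is shown in the log, and it is an illustration of the check, not minikube's cri.go.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors: sudo crictl ps -a --quiet --name=<name>
	// and returns the non-empty container IDs printed by crictl.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if line = strings.TrimSpace(line); line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("%s: probe failed: %v\n", c, err)
				continue
			}
			// An empty slice here corresponds to the `found id: ""` /
			// `0 containers: []` pairs in the log above.
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}
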
	W1212 01:38:21.535995  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:23.536531  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
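
	The node_ready warnings from process 287206 come from a second profile ("no-preload-361053") polling its node object while that cluster's apiserver is also refusing connections. Below is a hedged client-go sketch of this kind of Ready poll; the kubeconfig path is an assumption, and the snippet illustrates the retry pattern rather than reproducing minikube's node_ready.go.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path is an assumption; any config pointing at the profile works.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-361053", metav1.GetOptions{})
			if err != nil {
				// While the apiserver is down this branch fires, matching the
				// "connection refused ... (will retry)" warnings in the log.
				fmt.Println("will retry:", err)
				time.Sleep(2 * time.Second)
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
	}
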
	I1212 01:38:22.869782  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:22.880016  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:22.880091  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:22.903866  291455 cri.go:89] found id: ""
	I1212 01:38:22.903891  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.903901  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:22.903908  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:22.903971  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:22.927721  291455 cri.go:89] found id: ""
	I1212 01:38:22.927744  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.927752  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:22.927759  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:22.927816  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:22.952423  291455 cri.go:89] found id: ""
	I1212 01:38:22.952447  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.952455  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:22.952461  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:22.952517  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:22.976598  291455 cri.go:89] found id: ""
	I1212 01:38:22.976620  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.976628  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:22.976634  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:22.976691  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:23.003885  291455 cri.go:89] found id: ""
	I1212 01:38:23.003919  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.003939  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:23.003947  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:23.004046  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:23.033013  291455 cri.go:89] found id: ""
	I1212 01:38:23.033036  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.033045  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:23.033052  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:23.033112  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:23.092706  291455 cri.go:89] found id: ""
	I1212 01:38:23.092730  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.092739  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:23.092745  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:23.092802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:23.133640  291455 cri.go:89] found id: ""
	I1212 01:38:23.133668  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.133676  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:23.133686  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:23.133697  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:23.196413  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:23.196452  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:23.209608  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:23.209634  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:23.275524  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:23.267738    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.268351    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.269907    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.270261    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.271739    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:23.275547  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:23.275559  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:23.300618  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:23.300651  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
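
	Every "describe nodes" attempt above fails the same way: kubectl inside the node cannot reach localhost:8443. A quick way to confirm that nothing is listening on the apiserver port, independent of kubectl, is a plain TCP dial; the address is copied from the stderr above.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same endpoint kubectl is failing against in the log.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// "connect: connection refused" here means no apiserver process
			// is bound to the port, consistent with the empty crictl output.
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}
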
	I1212 01:38:25.829093  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:25.839308  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:25.839392  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:25.862901  291455 cri.go:89] found id: ""
	I1212 01:38:25.862927  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.862936  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:25.862942  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:25.863050  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:25.886878  291455 cri.go:89] found id: ""
	I1212 01:38:25.886912  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.886921  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:25.886927  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:25.887012  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:25.912760  291455 cri.go:89] found id: ""
	I1212 01:38:25.912782  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.912791  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:25.912799  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:25.912867  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:25.937385  291455 cri.go:89] found id: ""
	I1212 01:38:25.937409  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.937418  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:25.937424  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:25.937482  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:25.961635  291455 cri.go:89] found id: ""
	I1212 01:38:25.961659  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.961668  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:25.961674  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:25.961736  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:25.984780  291455 cri.go:89] found id: ""
	I1212 01:38:25.984804  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.984814  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:25.984821  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:25.984886  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:26.013891  291455 cri.go:89] found id: ""
	I1212 01:38:26.013918  291455 logs.go:282] 0 containers: []
	W1212 01:38:26.013927  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:26.013933  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:26.013995  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:26.058178  291455 cri.go:89] found id: ""
	I1212 01:38:26.058203  291455 logs.go:282] 0 containers: []
	W1212 01:38:26.058212  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:26.058222  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:26.058233  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:26.145226  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:26.145265  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:26.159401  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:26.159430  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:26.224696  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:26.216061    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.217085    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.217937    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.219401    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.219913    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:26.224716  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:26.224727  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:26.249818  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:26.249853  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:25.536763  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:28.036701  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:30.036797  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:28.780686  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:28.791844  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:28.791927  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:28.820089  291455 cri.go:89] found id: ""
	I1212 01:38:28.820114  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.820123  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:28.820129  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:28.820187  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:28.844073  291455 cri.go:89] found id: ""
	I1212 01:38:28.844097  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.844106  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:28.844115  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:28.844173  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:28.874510  291455 cri.go:89] found id: ""
	I1212 01:38:28.874535  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.874544  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:28.874550  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:28.874609  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:28.899593  291455 cri.go:89] found id: ""
	I1212 01:38:28.899667  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.899683  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:28.899691  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:28.899749  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:28.923958  291455 cri.go:89] found id: ""
	I1212 01:38:28.923981  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.923990  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:28.923996  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:28.924058  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:28.949188  291455 cri.go:89] found id: ""
	I1212 01:38:28.949217  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.949225  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:28.949231  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:28.949307  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:28.974943  291455 cri.go:89] found id: ""
	I1212 01:38:28.974968  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.974976  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:28.974982  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:28.975062  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:29.004380  291455 cri.go:89] found id: ""
	I1212 01:38:29.004475  291455 logs.go:282] 0 containers: []
	W1212 01:38:29.004501  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:29.004542  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:29.004572  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:29.021785  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:29.021856  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:29.143333  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:29.134378    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.134910    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.137306    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.137843    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.139511    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:29.143354  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:29.143366  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:29.168668  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:29.168699  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:29.197133  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:29.197159  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 01:38:32.536552  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:35.039253  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:31.753888  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:31.765059  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:31.765150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:31.790319  291455 cri.go:89] found id: ""
	I1212 01:38:31.790342  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.790350  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:31.790357  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:31.790415  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:31.815400  291455 cri.go:89] found id: ""
	I1212 01:38:31.815424  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.815434  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:31.815441  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:31.815502  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:31.840194  291455 cri.go:89] found id: ""
	I1212 01:38:31.840217  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.840226  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:31.840231  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:31.840291  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:31.867911  291455 cri.go:89] found id: ""
	I1212 01:38:31.867935  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.867943  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:31.867949  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:31.868008  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:31.892198  291455 cri.go:89] found id: ""
	I1212 01:38:31.892222  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.892230  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:31.892238  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:31.892296  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:31.916890  291455 cri.go:89] found id: ""
	I1212 01:38:31.916914  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.916923  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:31.916929  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:31.916988  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:31.942060  291455 cri.go:89] found id: ""
	I1212 01:38:31.942085  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.942095  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:31.942102  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:31.942160  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:31.968817  291455 cri.go:89] found id: ""
	I1212 01:38:31.968839  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.968848  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:31.968857  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:31.968871  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:31.997201  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:31.997227  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:32.062907  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:32.062945  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:32.079848  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:32.079874  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:32.172399  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:32.162924    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.163521    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.165105    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.165573    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.167197    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:32.172421  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:32.172433  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:34.699204  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:34.710589  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:34.710660  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:34.734740  291455 cri.go:89] found id: ""
	I1212 01:38:34.734767  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.734776  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:34.734782  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:34.734841  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:34.759636  291455 cri.go:89] found id: ""
	I1212 01:38:34.759659  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.759667  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:34.759679  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:34.759739  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:34.785220  291455 cri.go:89] found id: ""
	I1212 01:38:34.785255  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.785265  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:34.785271  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:34.785341  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:34.814480  291455 cri.go:89] found id: ""
	I1212 01:38:34.814502  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.814510  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:34.814516  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:34.814580  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:34.840740  291455 cri.go:89] found id: ""
	I1212 01:38:34.840774  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.840784  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:34.840790  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:34.840872  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:34.868875  291455 cri.go:89] found id: ""
	I1212 01:38:34.868898  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.868907  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:34.868913  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:34.868973  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:34.897841  291455 cri.go:89] found id: ""
	I1212 01:38:34.897864  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.897873  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:34.897879  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:34.897937  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:34.921846  291455 cri.go:89] found id: ""
	I1212 01:38:34.921869  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.921877  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:34.921886  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:34.921897  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:34.935038  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:34.935066  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:35.007684  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:34.997327    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:34.997746    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:34.999039    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:34.999714    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:35.001615    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:35.007755  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:35.007775  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:35.034750  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:35.034794  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:35.089747  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:35.089777  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
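
	Each retry ends with the same fixed fan-out of "Gathering logs for ..." sources. A sketch of that fan-out as a simple table of source name to shell command, with the command strings copied verbatim from the log; the local runner below is a stand-in for minikube's ssh_runner, which executes them over SSH, and the map's iteration order is random, much like the varying gather order between cycles above.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Source commands copied verbatim from the log lines above.
	var sources = map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}

	func main() {
		for name, cmd := range sources {
			fmt.Println("Gathering logs for", name, "...")
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("%s failed: %v\n", name, err)
			}
			fmt.Printf("%s: %d bytes\n", name, len(out))
		}
	}
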
	W1212 01:38:37.536673  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:39.543660  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:37.657148  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:37.668842  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:37.668917  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:37.696665  291455 cri.go:89] found id: ""
	I1212 01:38:37.696699  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.696708  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:37.696720  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:37.696777  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:37.728956  291455 cri.go:89] found id: ""
	I1212 01:38:37.728979  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.728987  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:37.728993  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:37.729058  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:37.753296  291455 cri.go:89] found id: ""
	I1212 01:38:37.753324  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.753334  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:37.753340  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:37.753397  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:37.778445  291455 cri.go:89] found id: ""
	I1212 01:38:37.778471  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.778481  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:37.778490  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:37.778548  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:37.807550  291455 cri.go:89] found id: ""
	I1212 01:38:37.807572  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.807580  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:37.807587  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:37.807649  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:37.832292  291455 cri.go:89] found id: ""
	I1212 01:38:37.832315  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.832323  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:37.832329  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:37.832386  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:37.856566  291455 cri.go:89] found id: ""
	I1212 01:38:37.856588  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.856597  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:37.856602  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:37.856660  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:37.880677  291455 cri.go:89] found id: ""
	I1212 01:38:37.880741  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.880766  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:37.880789  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:37.880820  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:37.910870  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:37.910908  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:37.938485  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:37.938520  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:37.993961  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:37.993995  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:38.010371  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:38.010404  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:38.096529  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:38.085475    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.086344    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.088104    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.088451    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.092325    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:40.598418  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:40.609775  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:40.609847  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:40.635651  291455 cri.go:89] found id: ""
	I1212 01:38:40.635677  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.635686  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:40.635693  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:40.635757  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:40.660863  291455 cri.go:89] found id: ""
	I1212 01:38:40.660889  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.660898  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:40.660905  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:40.660966  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:40.685941  291455 cri.go:89] found id: ""
	I1212 01:38:40.686012  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.686053  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:40.686078  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:40.686166  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:40.711525  291455 cri.go:89] found id: ""
	I1212 01:38:40.711554  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.711563  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:40.711569  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:40.711630  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:40.737721  291455 cri.go:89] found id: ""
	I1212 01:38:40.737795  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.737816  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:40.737836  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:40.737927  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:40.761337  291455 cri.go:89] found id: ""
	I1212 01:38:40.761402  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.761424  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:40.761442  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:40.761525  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:40.786163  291455 cri.go:89] found id: ""
	I1212 01:38:40.786239  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.786264  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:40.786285  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:40.786412  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:40.810546  291455 cri.go:89] found id: ""
	I1212 01:38:40.810610  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.810634  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:40.810655  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:40.810694  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:40.866283  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:40.866320  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:40.879799  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:40.879834  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:40.945902  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:40.937611    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.938411    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.939975    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.940544    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.942091    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:40.945925  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:40.945938  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:40.971267  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:40.971302  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
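The block above is minikube's standard diagnostic pass once the apiserver stops answering: it shells into the node and collects kubelet, dmesg, describe-nodes, containerd, and container-status output in turn. A minimal standalone sketch of the same collection, assuming a shell on the minikube node (each command is taken from the Run lines above; the describe-nodes step is omitted since it needs a live apiserver):

    # Run inside the node (e.g. via `minikube ssh`); mirrors the gather pass above.
    sudo journalctl -u kubelet -n 400        # last 400 kubelet journal lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u containerd -n 400     # container runtime journal
    # Container status: use crictl when present, fall back to docker otherwise.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a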
	W1212 01:38:42.036561  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:44.536569  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:43.502022  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:43.513782  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:43.513855  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:43.538026  291455 cri.go:89] found id: ""
	I1212 01:38:43.538047  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.538055  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:43.538060  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:43.538117  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:43.562296  291455 cri.go:89] found id: ""
	I1212 01:38:43.562320  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.562329  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:43.562335  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:43.562399  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:43.585964  291455 cri.go:89] found id: ""
	I1212 01:38:43.585986  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.585995  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:43.586001  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:43.586056  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:43.609636  291455 cri.go:89] found id: ""
	I1212 01:38:43.609658  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.609666  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:43.609672  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:43.609729  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:43.634822  291455 cri.go:89] found id: ""
	I1212 01:38:43.634843  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.634852  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:43.634857  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:43.634916  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:43.659517  291455 cri.go:89] found id: ""
	I1212 01:38:43.659539  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.659553  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:43.659560  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:43.659619  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:43.684416  291455 cri.go:89] found id: ""
	I1212 01:38:43.684471  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.684486  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:43.684493  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:43.684557  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:43.708909  291455 cri.go:89] found id: ""
	I1212 01:38:43.708931  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.708939  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:43.708949  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:43.708961  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:43.764034  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:43.764069  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:43.778276  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:43.778304  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:43.849112  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:43.839330    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.839703    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.842808    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.843485    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.845319    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:43.839330    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.839703    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.842808    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.843485    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.845319    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:43.849132  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:43.849144  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:43.874790  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:43.874823  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:47.036537  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:49.536417  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
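Both failure signatures interleaved here reduce to the same condition: nothing is accepting connections on the apiserver port (localhost:8443 from inside the node, 192.168.85.2:8443 from the test harness). A hedged way to confirm that directly, assuming shell access to the node (the curl and ss probes below are illustrative additions, not part of the harness):

    # Probe the endpoint the retries above are dialing; expect immediate refusal.
    curl -k --max-time 5 https://192.168.85.2:8443/healthz || echo "apiserver unreachable"
    # Verify whether any process is bound to the port at all.
    sudo ss -tlnp | grep ':8443 ' || echo "no listener on 8443"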
	I1212 01:38:46.404666  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:46.415686  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:46.415772  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:46.446409  291455 cri.go:89] found id: ""
	I1212 01:38:46.446436  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.446445  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:46.446452  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:46.446517  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:46.481137  291455 cri.go:89] found id: ""
	I1212 01:38:46.481160  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.481169  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:46.481175  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:46.481258  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:46.506866  291455 cri.go:89] found id: ""
	I1212 01:38:46.506892  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.506902  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:46.506908  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:46.506964  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:46.535109  291455 cri.go:89] found id: ""
	I1212 01:38:46.535185  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.535208  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:46.535228  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:46.535312  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:46.559379  291455 cri.go:89] found id: ""
	I1212 01:38:46.559402  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.559410  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:46.559417  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:46.559478  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:46.583642  291455 cri.go:89] found id: ""
	I1212 01:38:46.583717  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.583738  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:46.583758  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:46.583842  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:46.608474  291455 cri.go:89] found id: ""
	I1212 01:38:46.608541  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.608563  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:46.608578  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:46.608652  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:46.632905  291455 cri.go:89] found id: ""
	I1212 01:38:46.632982  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.632997  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:46.633007  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:46.633018  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:46.689011  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:46.689048  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:46.702565  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:46.702592  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:46.772610  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:46.763145    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.764149    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.764820    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.766385    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.766678    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:46.763145    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.764149    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.764820    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.766385    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.766678    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:46.772629  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:46.772643  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:46.797690  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:46.797725  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:49.328051  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:49.341287  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:49.341360  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:49.378113  291455 cri.go:89] found id: ""
	I1212 01:38:49.378135  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.378143  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:49.378149  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:49.378210  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:49.404269  291455 cri.go:89] found id: ""
	I1212 01:38:49.404291  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.404300  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:49.404306  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:49.404364  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:49.428783  291455 cri.go:89] found id: ""
	I1212 01:38:49.428809  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.428819  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:49.428825  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:49.428884  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:49.453856  291455 cri.go:89] found id: ""
	I1212 01:38:49.453889  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.453898  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:49.453905  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:49.453965  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:49.480403  291455 cri.go:89] found id: ""
	I1212 01:38:49.480428  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.480439  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:49.480445  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:49.480502  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:49.505527  291455 cri.go:89] found id: ""
	I1212 01:38:49.505594  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.505617  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:49.505644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:49.505740  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:49.529450  291455 cri.go:89] found id: ""
	I1212 01:38:49.529474  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.529483  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:49.529489  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:49.529546  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:49.554349  291455 cri.go:89] found id: ""
	I1212 01:38:49.554412  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.554435  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:49.554465  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:49.554493  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:49.611773  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:49.611805  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:49.625145  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:49.625169  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:49.689186  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:49.680639    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.681463    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.683157    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.683640    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.685287    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:49.680639    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.681463    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.683157    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.683640    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.685287    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:49.689208  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:49.689220  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:49.715241  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:49.715275  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:51.536523  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:53.536619  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:52.245578  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:52.255964  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:52.256032  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:52.288234  291455 cri.go:89] found id: ""
	I1212 01:38:52.288273  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.288281  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:52.288287  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:52.288362  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:52.361726  291455 cri.go:89] found id: ""
	I1212 01:38:52.361756  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.361765  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:52.361772  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:52.361848  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:52.390222  291455 cri.go:89] found id: ""
	I1212 01:38:52.390248  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.390257  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:52.390262  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:52.390320  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:52.415677  291455 cri.go:89] found id: ""
	I1212 01:38:52.415712  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.415721  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:52.415728  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:52.415796  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:52.440412  291455 cri.go:89] found id: ""
	I1212 01:38:52.440435  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.440444  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:52.440450  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:52.440508  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:52.464172  291455 cri.go:89] found id: ""
	I1212 01:38:52.464203  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.464212  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:52.464219  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:52.464278  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:52.496050  291455 cri.go:89] found id: ""
	I1212 01:38:52.496075  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.496083  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:52.496089  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:52.496147  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:52.525249  291455 cri.go:89] found id: ""
	I1212 01:38:52.525271  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.525279  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:52.525288  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:52.525299  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:52.580198  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:52.580233  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:52.593582  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:52.593648  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:52.659167  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:52.650803    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.651520    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.653182    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.653702    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.655438    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:52.650803    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.651520    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.653182    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.653702    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.655438    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:52.659187  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:52.659199  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:52.685268  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:52.685300  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:55.219025  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:55.229148  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:55.229222  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:55.252977  291455 cri.go:89] found id: ""
	I1212 01:38:55.253051  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.253066  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:55.253077  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:55.253140  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:55.276881  291455 cri.go:89] found id: ""
	I1212 01:38:55.276945  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.276959  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:55.276966  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:55.277024  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:55.316321  291455 cri.go:89] found id: ""
	I1212 01:38:55.316355  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.316364  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:55.316370  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:55.316447  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:55.355675  291455 cri.go:89] found id: ""
	I1212 01:38:55.355703  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.355711  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:55.355717  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:55.355791  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:55.394580  291455 cri.go:89] found id: ""
	I1212 01:38:55.394607  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.394615  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:55.394621  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:55.394693  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:55.423340  291455 cri.go:89] found id: ""
	I1212 01:38:55.423363  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.423371  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:55.423378  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:55.423436  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:55.447512  291455 cri.go:89] found id: ""
	I1212 01:38:55.447536  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.447544  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:55.447550  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:55.447610  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:55.470830  291455 cri.go:89] found id: ""
	I1212 01:38:55.470853  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.470867  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:55.470876  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:55.470886  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:55.528525  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:55.528561  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:55.541815  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:55.541843  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:55.605253  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:55.596889    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.597592    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.599233    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.599799    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.601358    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:55.596889    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.597592    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.599233    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.599799    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.601358    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:55.605280  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:55.605292  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:55.631237  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:55.631267  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:55.536688  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:58.036700  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:58.158753  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:58.169462  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:58.169546  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:58.194075  291455 cri.go:89] found id: ""
	I1212 01:38:58.194096  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.194105  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:58.194111  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:58.194171  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:58.218468  291455 cri.go:89] found id: ""
	I1212 01:38:58.218546  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.218569  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:58.218590  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:58.218675  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:58.242950  291455 cri.go:89] found id: ""
	I1212 01:38:58.242973  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.242981  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:58.242987  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:58.243142  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:58.269403  291455 cri.go:89] found id: ""
	I1212 01:38:58.269423  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.269432  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:58.269439  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:58.269502  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:58.317022  291455 cri.go:89] found id: ""
	I1212 01:38:58.317044  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.317054  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:58.317059  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:58.317117  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:58.373414  291455 cri.go:89] found id: ""
	I1212 01:38:58.373486  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.373511  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:58.373531  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:58.373619  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:58.404516  291455 cri.go:89] found id: ""
	I1212 01:38:58.404583  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.404597  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:58.404604  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:58.404663  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:58.433096  291455 cri.go:89] found id: ""
	I1212 01:38:58.433120  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.433131  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:58.433141  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:58.433170  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:58.495200  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:58.486845    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.487734    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.489310    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.489623    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.491296    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:58.486845    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.487734    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.489310    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.489623    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.491296    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:58.495223  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:58.495237  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:58.520595  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:58.520626  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:58.547636  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:58.547664  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:58.603945  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:58.603979  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
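Each retry cycle, including the one that starts on the next line, opens with the same liveness probe: pgrep -xnf matches the pattern against the full command line (-f) as an exact regex (-x) and reports only the newest matching PID (-n), so a dead kube-apiserver sends the cycle straight into the crictl enumeration. A standalone sketch of that check:

    # Exact full-command-line match, newest PID only; the exit status drives the branch.
    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "kube-apiserver process is running"
    else
        echo "kube-apiserver process not found; fall back to container listing"
    fi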
	I1212 01:39:01.119071  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:01.130124  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:01.130196  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:01.155700  291455 cri.go:89] found id: ""
	I1212 01:39:01.155725  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.155733  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:01.155740  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:01.155799  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:01.183985  291455 cri.go:89] found id: ""
	I1212 01:39:01.184012  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.184021  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:01.184028  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:01.184095  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:01.211713  291455 cri.go:89] found id: ""
	I1212 01:39:01.211740  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.211749  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:01.211756  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:01.211817  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:01.238159  291455 cri.go:89] found id: ""
	I1212 01:39:01.238185  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.238195  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:01.238201  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:01.238265  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:01.264520  291455 cri.go:89] found id: ""
	I1212 01:39:01.264544  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.264553  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:01.264560  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:01.264618  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:01.320162  291455 cri.go:89] found id: ""
	I1212 01:39:01.320191  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.320200  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:01.320207  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:01.320276  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1212 01:39:00.536335  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:02.536671  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:05.036449  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:39:01.367993  291455 cri.go:89] found id: ""
	I1212 01:39:01.368020  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.368029  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:01.368037  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:01.368107  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:01.395205  291455 cri.go:89] found id: ""
	I1212 01:39:01.395230  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.395239  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:01.395248  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:01.395260  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:01.450970  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:01.451049  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:01.464511  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:01.464540  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:01.529452  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:01.521771    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.522386    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.523907    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.524217    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.525703    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:01.521771    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.522386    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.523907    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.524217    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.525703    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:01.529472  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:01.529484  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:01.553702  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:01.553734  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:04.082286  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:04.093237  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:04.093313  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:04.118261  291455 cri.go:89] found id: ""
	I1212 01:39:04.118283  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.118292  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:04.118298  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:04.118360  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:04.147714  291455 cri.go:89] found id: ""
	I1212 01:39:04.147736  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.147745  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:04.147751  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:04.147815  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:04.172999  291455 cri.go:89] found id: ""
	I1212 01:39:04.173023  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.173032  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:04.173039  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:04.173101  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:04.197081  291455 cri.go:89] found id: ""
	I1212 01:39:04.197103  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.197111  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:04.197119  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:04.197176  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:04.220639  291455 cri.go:89] found id: ""
	I1212 01:39:04.220665  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.220674  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:04.220681  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:04.220746  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:04.248901  291455 cri.go:89] found id: ""
	I1212 01:39:04.248926  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.248935  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:04.248944  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:04.249011  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:04.274064  291455 cri.go:89] found id: ""
	I1212 01:39:04.274085  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.274093  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:04.274099  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:04.274161  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:04.332510  291455 cri.go:89] found id: ""
	I1212 01:39:04.332535  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.332545  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:04.332555  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:04.332572  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:04.368151  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:04.368189  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:04.403091  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:04.403118  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:04.459000  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:04.459031  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:04.472281  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:04.472306  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:04.534979  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:04.526363    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.527054    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.528724    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.529233    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.530692    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:04.526363    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.527054    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.528724    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.529233    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.530692    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
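The cycle above is minikube's log gatherer walking a fixed list of control-plane component names and running the same `sudo crictl ps -a --quiet --name=<component>` probe for each; every probe returns an empty ID list (`found id: ""`), including for etcd, which points to no control-plane container ever being created rather than one crashing after start. The same enumeration condensed into a single shell loop for manual use (the crictl invocation is taken verbatim from the log; the loop itself is only an illustration, not minikube's code):

    # probe each control-plane component the way logs.go does, in one loop
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done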
	W1212 01:39:07.036549  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:09.036731  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:39:07.035447  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:07.046244  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:07.046313  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:07.072737  291455 cri.go:89] found id: ""
	I1212 01:39:07.072761  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.072770  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:07.072776  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:07.072835  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:07.097400  291455 cri.go:89] found id: ""
	I1212 01:39:07.097423  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.097431  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:07.097438  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:07.097496  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:07.121464  291455 cri.go:89] found id: ""
	I1212 01:39:07.121486  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.121495  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:07.121501  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:07.121584  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:07.145780  291455 cri.go:89] found id: ""
	I1212 01:39:07.145800  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.145808  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:07.145814  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:07.145870  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:07.169997  291455 cri.go:89] found id: ""
	I1212 01:39:07.170018  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.170027  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:07.170033  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:07.170091  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:07.195061  291455 cri.go:89] found id: ""
	I1212 01:39:07.195088  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.195096  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:07.195103  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:07.195161  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:07.220294  291455 cri.go:89] found id: ""
	I1212 01:39:07.220317  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.220325  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:07.220331  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:07.220389  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:07.245551  291455 cri.go:89] found id: ""
	I1212 01:39:07.245576  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.245586  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:07.245595  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:07.245607  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:07.277493  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:07.277521  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:07.344946  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:07.347238  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:07.376690  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:07.376714  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:07.447695  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:07.438862    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.439591    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.441334    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.441943    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.443673    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:07.438862    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.439591    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.441334    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.441943    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.443673    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:07.447717  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:07.447730  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:09.974214  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:09.987839  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:09.987921  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:10.025371  291455 cri.go:89] found id: ""
	I1212 01:39:10.025397  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.025407  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:10.025413  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:10.025477  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:10.051333  291455 cri.go:89] found id: ""
	I1212 01:39:10.051357  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.051366  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:10.051371  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:10.051436  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:10.075263  291455 cri.go:89] found id: ""
	I1212 01:39:10.075289  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.075298  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:10.075305  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:10.075364  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:10.103331  291455 cri.go:89] found id: ""
	I1212 01:39:10.103355  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.103364  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:10.103370  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:10.103431  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:10.128706  291455 cri.go:89] found id: ""
	I1212 01:39:10.128730  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.128739  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:10.128746  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:10.128802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:10.154605  291455 cri.go:89] found id: ""
	I1212 01:39:10.154627  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.154637  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:10.154644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:10.154703  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:10.179767  291455 cri.go:89] found id: ""
	I1212 01:39:10.179791  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.179800  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:10.179806  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:10.179864  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:10.208346  291455 cri.go:89] found id: ""
	I1212 01:39:10.208369  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.208376  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:10.208386  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:10.208397  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:10.263848  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:10.263883  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:10.279969  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:10.279994  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:10.405176  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:10.396616    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.397197    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.398853    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.399595    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.401217    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:10.396616    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.397197    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.398853    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.399595    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.401217    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:10.405198  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:10.405210  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:10.431360  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:10.431398  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:39:11.536529  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:13.536580  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:39:12.959344  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:12.971541  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:12.971628  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:13.006786  291455 cri.go:89] found id: ""
	I1212 01:39:13.006815  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.006824  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:13.006830  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:13.006903  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:13.032106  291455 cri.go:89] found id: ""
	I1212 01:39:13.032127  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.032135  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:13.032141  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:13.032200  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:13.057432  291455 cri.go:89] found id: ""
	I1212 01:39:13.057454  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.057463  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:13.057469  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:13.057529  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:13.082502  291455 cri.go:89] found id: ""
	I1212 01:39:13.082524  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.082532  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:13.082538  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:13.082595  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:13.108199  291455 cri.go:89] found id: ""
	I1212 01:39:13.108272  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.108295  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:13.108323  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:13.108433  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:13.134284  291455 cri.go:89] found id: ""
	I1212 01:39:13.134356  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.134379  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:13.134398  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:13.134485  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:13.159517  291455 cri.go:89] found id: ""
	I1212 01:39:13.159541  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.159550  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:13.159556  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:13.159614  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:13.183175  291455 cri.go:89] found id: ""
	I1212 01:39:13.183199  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.183207  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:13.183216  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:13.183232  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:13.241174  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:13.241210  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:13.254849  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:13.254880  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:13.381552  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:13.373347    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.373888    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.375400    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.375820    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.377000    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:13.373347    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.373888    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.375400    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.375820    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.377000    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:13.381573  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:13.381586  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:13.406354  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:13.406385  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:15.933099  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:15.943596  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:15.943674  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:15.966960  291455 cri.go:89] found id: ""
	I1212 01:39:15.967014  291455 logs.go:282] 0 containers: []
	W1212 01:39:15.967023  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:15.967030  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:15.967090  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:15.996145  291455 cri.go:89] found id: ""
	I1212 01:39:15.996167  291455 logs.go:282] 0 containers: []
	W1212 01:39:15.996175  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:15.996182  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:15.996239  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:16.025152  291455 cri.go:89] found id: ""
	I1212 01:39:16.025175  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.025183  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:16.025191  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:16.025248  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:16.050231  291455 cri.go:89] found id: ""
	I1212 01:39:16.050264  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.050273  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:16.050279  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:16.050345  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:16.076929  291455 cri.go:89] found id: ""
	I1212 01:39:16.076958  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.076967  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:16.076975  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:16.077054  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:16.102241  291455 cri.go:89] found id: ""
	I1212 01:39:16.102273  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.102282  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:16.102304  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:16.102383  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:16.126239  291455 cri.go:89] found id: ""
	I1212 01:39:16.126302  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.126324  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:16.126344  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:16.126417  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:16.151645  291455 cri.go:89] found id: ""
	I1212 01:39:16.151674  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.151683  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:16.151692  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:16.151702  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:16.176852  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:16.176882  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:16.206720  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:16.206746  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:16.262653  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:16.262686  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:16.275603  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:16.275634  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:16.035987  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:18.036847  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:39:18.536213  287206 node_ready.go:38] duration metric: took 6m0.000908955s for node "no-preload-361053" to be "Ready" ...
	I1212 01:39:18.539274  287206 out.go:203] 
	W1212 01:39:18.542145  287206 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 01:39:18.542166  287206 out.go:285] * 
	W1212 01:39:18.544311  287206 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:39:18.547291  287206 out.go:203] 
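This is the terminal failure for the no-preload-361053 start: process 287206 has been polling the node's Ready condition against https://192.168.85.2:8443 at roughly two-second intervals, and the loop gives up exactly at its 6m0s budget (`took 6m0.000908955s`), exiting with GUEST_START. The same Ready check can be run by hand once the profile's kubeconfig context exists; the context name below assumes minikube's usual convention of naming the context after the profile:

    # read the same Ready condition the wait loop polls (context name assumed
    # to match the profile, per minikube's usual convention)
    kubectl --context no-preload-361053 get node no-preload-361053 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'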
	W1212 01:39:16.359325  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:16.351492    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.351974    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.353266    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.353666    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.355275    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:16.351492    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.351974    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.353266    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.353666    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.355275    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:18.859963  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:18.870960  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:18.871050  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:18.910477  291455 cri.go:89] found id: ""
	I1212 01:39:18.910504  291455 logs.go:282] 0 containers: []
	W1212 01:39:18.910513  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:18.910519  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:18.910580  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:18.935189  291455 cri.go:89] found id: ""
	I1212 01:39:18.935212  291455 logs.go:282] 0 containers: []
	W1212 01:39:18.935221  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:18.935226  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:18.935282  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:18.960848  291455 cri.go:89] found id: ""
	I1212 01:39:18.960874  291455 logs.go:282] 0 containers: []
	W1212 01:39:18.960883  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:18.960888  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:18.960945  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:18.999545  291455 cri.go:89] found id: ""
	I1212 01:39:18.999572  291455 logs.go:282] 0 containers: []
	W1212 01:39:18.999581  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:18.999594  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:18.999657  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:19.037306  291455 cri.go:89] found id: ""
	I1212 01:39:19.037333  291455 logs.go:282] 0 containers: []
	W1212 01:39:19.037341  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:19.037347  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:19.037405  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:19.076075  291455 cri.go:89] found id: ""
	I1212 01:39:19.076096  291455 logs.go:282] 0 containers: []
	W1212 01:39:19.076104  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:19.076114  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:19.076168  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:19.106494  291455 cri.go:89] found id: ""
	I1212 01:39:19.106515  291455 logs.go:282] 0 containers: []
	W1212 01:39:19.106524  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:19.106529  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:19.106586  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:19.133049  291455 cri.go:89] found id: ""
	I1212 01:39:19.133073  291455 logs.go:282] 0 containers: []
	W1212 01:39:19.133082  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:19.133090  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:19.133105  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:19.218096  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:19.208102    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.208898    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.210671    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.211009    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.214074    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:19.208102    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.208898    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.210671    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.211009    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.214074    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:19.218119  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:19.218140  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:19.246120  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:19.246155  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:19.279088  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:19.279116  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:19.436253  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:19.436340  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
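With no containers to inspect, the only remaining signal is in the host-level sources the gatherer falls back to here: the kubelet journal, the containerd journal, and the kernel ring buffer. A sketch of pulling the same three sources by hand; the journalctl and dmesg invocations mirror the log's own commands, while the grep filter is only an illustrative addition:

    # kubelet and containerd journals, filtered for likely failure lines
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'fail|error' | tail -n 20
    sudo journalctl -u containerd -n 400 --no-pager | grep -iE 'fail|error' | tail -n 20
    # kernel ring buffer, warnings and above, as gathered above
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 40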
	I1212 01:39:21.952490  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:21.962606  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:21.962676  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:21.986826  291455 cri.go:89] found id: ""
	I1212 01:39:21.986851  291455 logs.go:282] 0 containers: []
	W1212 01:39:21.986859  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:21.986866  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:21.986923  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:22.014517  291455 cri.go:89] found id: ""
	I1212 01:39:22.014541  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.014551  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:22.014557  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:22.014623  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:22.041526  291455 cri.go:89] found id: ""
	I1212 01:39:22.041552  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.041561  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:22.041568  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:22.041633  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:22.067041  291455 cri.go:89] found id: ""
	I1212 01:39:22.067069  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.067079  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:22.067086  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:22.067149  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:22.092937  291455 cri.go:89] found id: ""
	I1212 01:39:22.092973  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.092982  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:22.092988  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:22.093059  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:22.122005  291455 cri.go:89] found id: ""
	I1212 01:39:22.122031  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.122039  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:22.122045  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:22.122107  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:22.147474  291455 cri.go:89] found id: ""
	I1212 01:39:22.147500  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.147508  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:22.147515  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:22.147577  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:22.177172  291455 cri.go:89] found id: ""
	I1212 01:39:22.177199  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.177208  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:22.177219  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:22.177231  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:22.234049  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:22.234083  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:22.247594  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:22.247619  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:22.368443  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:22.359792    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.360617    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.362109    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.362602    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.364143    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:22.359792    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.360617    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.362109    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.362602    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.364143    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:22.368462  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:22.368485  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:22.393929  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:22.393963  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:24.924468  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:24.934611  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:24.934679  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:24.960488  291455 cri.go:89] found id: ""
	I1212 01:39:24.960510  291455 logs.go:282] 0 containers: []
	W1212 01:39:24.960519  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:24.960524  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:24.960580  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:24.985199  291455 cri.go:89] found id: ""
	I1212 01:39:24.985222  291455 logs.go:282] 0 containers: []
	W1212 01:39:24.985231  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:24.985238  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:24.985295  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:25.017557  291455 cri.go:89] found id: ""
	I1212 01:39:25.017583  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.017594  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:25.017601  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:25.017673  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:25.043724  291455 cri.go:89] found id: ""
	I1212 01:39:25.043756  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.043766  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:25.043773  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:25.043836  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:25.068913  291455 cri.go:89] found id: ""
	I1212 01:39:25.068941  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.068951  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:25.068958  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:25.069021  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:25.094251  291455 cri.go:89] found id: ""
	I1212 01:39:25.094274  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.094282  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:25.094288  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:25.094347  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:25.118452  291455 cri.go:89] found id: ""
	I1212 01:39:25.118530  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.118554  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:25.118575  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:25.118691  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:25.143548  291455 cri.go:89] found id: ""
	I1212 01:39:25.143571  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.143584  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:25.143594  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:25.143605  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:25.201626  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:25.201662  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:25.214871  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:25.214900  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:25.278860  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:25.271096    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.271605    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.273123    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.273537    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.275035    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:25.271096    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.271605    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.273123    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.273537    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.275035    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
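	The cycle above is minikube's control-plane diagnostic loop: it probes for each expected container with crictl, finds none, and then gathers kubelet, dmesg, describe-nodes, containerd, and container-status output before retrying. A minimal sketch of running the same probe by hand, assuming shell access to the node and with <profile> standing in for the failing profile name:

	    # Probe for each control-plane container the way the gatherer does above.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      echo "== $name =="
	      minikube -p <profile> ssh -- sudo crictl ps -a --quiet --name="$name"
	    done
	    # Check whether anything is listening on the apiserver port that the
	    # failing kubectl calls target.
	    minikube -p <profile> ssh -- sudo ss -ltn 'sport = :8443'
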
	I1212 01:39:25.278890  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:25.278903  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:25.313862  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:25.313902  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:27.877952  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:27.888461  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:27.888534  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:27.912285  291455 cri.go:89] found id: ""
	I1212 01:39:27.912308  291455 logs.go:282] 0 containers: []
	W1212 01:39:27.912317  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:27.912323  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:27.912382  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:27.936668  291455 cri.go:89] found id: ""
	I1212 01:39:27.936693  291455 logs.go:282] 0 containers: []
	W1212 01:39:27.936701  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:27.936707  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:27.936763  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:27.964911  291455 cri.go:89] found id: ""
	I1212 01:39:27.964936  291455 logs.go:282] 0 containers: []
	W1212 01:39:27.964945  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:27.964952  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:27.965011  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:27.988509  291455 cri.go:89] found id: ""
	I1212 01:39:27.988530  291455 logs.go:282] 0 containers: []
	W1212 01:39:27.988539  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:27.988545  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:27.988606  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:28.014439  291455 cri.go:89] found id: ""
	I1212 01:39:28.014461  291455 logs.go:282] 0 containers: []
	W1212 01:39:28.014469  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:28.014475  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:28.014542  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:28.040611  291455 cri.go:89] found id: ""
	I1212 01:39:28.040637  291455 logs.go:282] 0 containers: []
	W1212 01:39:28.040646  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:28.040652  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:28.040711  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:28.064823  291455 cri.go:89] found id: ""
	I1212 01:39:28.064844  291455 logs.go:282] 0 containers: []
	W1212 01:39:28.064852  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:28.064858  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:28.064922  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:28.089374  291455 cri.go:89] found id: ""
	I1212 01:39:28.089397  291455 logs.go:282] 0 containers: []
	W1212 01:39:28.089405  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:28.089414  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:28.089426  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:28.146024  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:28.146058  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:28.160130  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:28.160159  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:28.225838  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:28.217551    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.218334    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.219917    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.220532    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.222161    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:28.217551    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.218334    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.219917    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.220532    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.222161    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:28.225864  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:28.225878  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:28.250733  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:28.250768  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:30.798068  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:30.808169  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:30.808239  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:30.836768  291455 cri.go:89] found id: ""
	I1212 01:39:30.836789  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.836798  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:30.836805  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:30.836863  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:30.860144  291455 cri.go:89] found id: ""
	I1212 01:39:30.860169  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.860179  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:30.860185  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:30.860242  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:30.884081  291455 cri.go:89] found id: ""
	I1212 01:39:30.884107  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.884116  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:30.884122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:30.884180  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:30.908110  291455 cri.go:89] found id: ""
	I1212 01:39:30.908133  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.908147  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:30.908153  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:30.908213  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:30.934406  291455 cri.go:89] found id: ""
	I1212 01:39:30.934428  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.934436  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:30.934449  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:30.934507  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:30.962854  291455 cri.go:89] found id: ""
	I1212 01:39:30.962877  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.962885  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:30.962891  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:30.962963  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:30.986340  291455 cri.go:89] found id: ""
	I1212 01:39:30.986366  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.986375  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:30.986385  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:30.986447  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:31.021526  291455 cri.go:89] found id: ""
	I1212 01:39:31.021557  291455 logs.go:282] 0 containers: []
	W1212 01:39:31.021567  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:31.021576  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:31.021586  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:31.080147  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:31.080186  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:31.094865  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:31.094894  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:31.159994  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:31.150850    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.151532    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.153295    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.153811    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.156044    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:31.150850    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.151532    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.153295    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.153811    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.156044    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:31.160017  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:31.160030  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:31.187806  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:31.187844  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:33.721677  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:33.732122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:33.732196  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:33.756604  291455 cri.go:89] found id: ""
	I1212 01:39:33.756627  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.756636  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:33.756642  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:33.756703  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:33.782055  291455 cri.go:89] found id: ""
	I1212 01:39:33.782079  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.782088  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:33.782094  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:33.782150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:33.806217  291455 cri.go:89] found id: ""
	I1212 01:39:33.806242  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.806250  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:33.806256  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:33.806313  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:33.829556  291455 cri.go:89] found id: ""
	I1212 01:39:33.829580  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.829588  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:33.829595  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:33.829651  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:33.856222  291455 cri.go:89] found id: ""
	I1212 01:39:33.856251  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.856259  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:33.856265  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:33.856323  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:33.886601  291455 cri.go:89] found id: ""
	I1212 01:39:33.886624  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.886639  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:33.886646  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:33.886703  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:33.910597  291455 cri.go:89] found id: ""
	I1212 01:39:33.910621  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.910630  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:33.910636  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:33.910701  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:33.934158  291455 cri.go:89] found id: ""
	I1212 01:39:33.934185  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.934193  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:33.934202  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:33.934214  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:33.958501  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:33.958533  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:33.986448  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:33.986476  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:34.042064  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:34.042099  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:34.056951  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:34.056977  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:34.127667  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:34.120136    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.120757    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.121818    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.122189    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.123766    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:34.120136    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.120757    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.121818    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.122189    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.123766    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
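	Every "describe nodes" attempt fails the same way: nothing answers on localhost:8443, so client-go discovery (the memcache.go:265 lines) logs five connection-refused errors before kubectl gives up. The usual next step is to ask why the kubelet never launched the static apiserver pod; a sketch, assuming the same node access as above (the paths are the standard kubeadm locations minikube uses):

	    # Recent kubelet activity, matching the "Gathering logs for kubelet" step.
	    sudo journalctl -u kubelet -n 400 --no-pager | tail -n 60
	    # Static pod manifests for the control plane; if these are missing or
	    # malformed, no apiserver container will ever be created.
	    ls -l /etc/kubernetes/manifests/
	    # Containerd's view of pod sandboxes, including ones that failed to start.
	    sudo crictl pods
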
	I1212 01:39:36.628762  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:36.639233  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:36.639305  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:36.663673  291455 cri.go:89] found id: ""
	I1212 01:39:36.663698  291455 logs.go:282] 0 containers: []
	W1212 01:39:36.663706  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:36.663713  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:36.663793  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:36.688862  291455 cri.go:89] found id: ""
	I1212 01:39:36.688887  291455 logs.go:282] 0 containers: []
	W1212 01:39:36.688895  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:36.688901  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:36.688963  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:36.713353  291455 cri.go:89] found id: ""
	I1212 01:39:36.713377  291455 logs.go:282] 0 containers: []
	W1212 01:39:36.713386  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:36.713392  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:36.713451  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:36.740680  291455 cri.go:89] found id: ""
	I1212 01:39:36.740747  291455 logs.go:282] 0 containers: []
	W1212 01:39:36.740762  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:36.740769  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:36.740831  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:36.765582  291455 cri.go:89] found id: ""
	I1212 01:39:36.765657  291455 logs.go:282] 0 containers: []
	W1212 01:39:36.765679  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:36.765700  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:36.765788  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:36.790002  291455 cri.go:89] found id: ""
	I1212 01:39:36.790025  291455 logs.go:282] 0 containers: []
	W1212 01:39:36.790034  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:36.790040  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:36.790099  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:36.816694  291455 cri.go:89] found id: ""
	I1212 01:39:36.816717  291455 logs.go:282] 0 containers: []
	W1212 01:39:36.816728  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:36.816734  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:36.816793  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:36.845138  291455 cri.go:89] found id: ""
	I1212 01:39:36.845202  291455 logs.go:282] 0 containers: []
	W1212 01:39:36.845218  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:36.845229  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:36.845241  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:36.903016  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:36.903054  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:36.918866  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:36.918903  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:36.984787  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:36.976973    9447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:36.977356    9447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:36.978879    9447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:36.979250    9447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:36.980908    9447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:36.976973    9447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:36.977356    9447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:36.978879    9447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:36.979250    9447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:36.980908    9447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:36.984810  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:36.984821  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:37.009360  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:37.009399  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:39.548914  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:39.562684  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:39.562807  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:39.598338  291455 cri.go:89] found id: ""
	I1212 01:39:39.598363  291455 logs.go:282] 0 containers: []
	W1212 01:39:39.598372  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:39.598378  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:39.598435  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:39.636963  291455 cri.go:89] found id: ""
	I1212 01:39:39.636985  291455 logs.go:282] 0 containers: []
	W1212 01:39:39.636993  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:39.636999  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:39.637057  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:39.662905  291455 cri.go:89] found id: ""
	I1212 01:39:39.662928  291455 logs.go:282] 0 containers: []
	W1212 01:39:39.662936  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:39.662942  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:39.663047  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:39.686743  291455 cri.go:89] found id: ""
	I1212 01:39:39.686808  291455 logs.go:282] 0 containers: []
	W1212 01:39:39.686819  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:39.686826  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:39.686892  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:39.710381  291455 cri.go:89] found id: ""
	I1212 01:39:39.710452  291455 logs.go:282] 0 containers: []
	W1212 01:39:39.710476  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:39.710496  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:39.710581  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:39.734865  291455 cri.go:89] found id: ""
	I1212 01:39:39.734894  291455 logs.go:282] 0 containers: []
	W1212 01:39:39.734903  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:39.734910  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:39.735019  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:39.762787  291455 cri.go:89] found id: ""
	I1212 01:39:39.762813  291455 logs.go:282] 0 containers: []
	W1212 01:39:39.762822  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:39.762828  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:39.762940  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:39.788339  291455 cri.go:89] found id: ""
	I1212 01:39:39.788368  291455 logs.go:282] 0 containers: []
	W1212 01:39:39.788378  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:39.788388  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:39.788417  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:39.843014  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:39.843046  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:39.856565  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:39.856593  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:39.921611  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:39.913029    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:39.913865    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:39.915691    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:39.916129    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:39.917607    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:39.913029    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:39.913865    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:39.915691    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:39.916129    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:39.917607    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:39.921632  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:39.921644  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:39.948006  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:39.948039  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:42.479881  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:42.490524  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:42.490602  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:42.515563  291455 cri.go:89] found id: ""
	I1212 01:39:42.515641  291455 logs.go:282] 0 containers: []
	W1212 01:39:42.515656  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:42.515664  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:42.515725  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:42.560104  291455 cri.go:89] found id: ""
	I1212 01:39:42.560137  291455 logs.go:282] 0 containers: []
	W1212 01:39:42.560145  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:42.560152  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:42.560226  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:42.597091  291455 cri.go:89] found id: ""
	I1212 01:39:42.597131  291455 logs.go:282] 0 containers: []
	W1212 01:39:42.597140  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:42.597147  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:42.597219  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:42.629203  291455 cri.go:89] found id: ""
	I1212 01:39:42.629233  291455 logs.go:282] 0 containers: []
	W1212 01:39:42.629242  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:42.629248  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:42.629312  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:42.657935  291455 cri.go:89] found id: ""
	I1212 01:39:42.657959  291455 logs.go:282] 0 containers: []
	W1212 01:39:42.657968  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:42.657974  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:42.658039  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:42.684776  291455 cri.go:89] found id: ""
	I1212 01:39:42.684806  291455 logs.go:282] 0 containers: []
	W1212 01:39:42.684815  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:42.684822  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:42.684879  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:42.709384  291455 cri.go:89] found id: ""
	I1212 01:39:42.709419  291455 logs.go:282] 0 containers: []
	W1212 01:39:42.709429  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:42.709435  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:42.709505  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:42.733686  291455 cri.go:89] found id: ""
	I1212 01:39:42.733728  291455 logs.go:282] 0 containers: []
	W1212 01:39:42.733737  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:42.733747  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:42.733758  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:42.758552  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:42.758630  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:42.787823  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:42.787852  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:42.845099  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:42.845135  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:42.858856  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:42.858904  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:42.924089  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:42.915364    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:42.915778    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:42.917282    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:42.918788    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:42.920024    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:42.915364    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:42.915778    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:42.917282    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:42.918788    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:42.920024    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
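	The timestamps show the loop retrying on a roughly 2.5-3 second cadence (01:39:22 through 01:39:45 in this excerpt) with the result never changing, so the failure is steady-state rather than flapping. When skimming a saved copy of such a log, a quick count confirms that; a sketch, with test.log as a hypothetical filename for this output:

	    # Number of health-check iterations captured in the log.
	    grep -c 'Run: sudo pgrep -xnf kube-apiserver' test.log
	    # Confirm a single repeating root cause.
	    grep -c 'connect: connection refused' test.log
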
	I1212 01:39:45.424349  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:45.434772  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:45.434853  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:45.459272  291455 cri.go:89] found id: ""
	I1212 01:39:45.459297  291455 logs.go:282] 0 containers: []
	W1212 01:39:45.459306  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:45.459351  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:45.459482  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:45.485201  291455 cri.go:89] found id: ""
	I1212 01:39:45.485235  291455 logs.go:282] 0 containers: []
	W1212 01:39:45.485244  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:45.485266  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:45.485348  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:45.508997  291455 cri.go:89] found id: ""
	I1212 01:39:45.509022  291455 logs.go:282] 0 containers: []
	W1212 01:39:45.509031  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:45.509037  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:45.509094  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:45.533177  291455 cri.go:89] found id: ""
	I1212 01:39:45.533209  291455 logs.go:282] 0 containers: []
	W1212 01:39:45.533218  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:45.533224  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:45.533289  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:45.594513  291455 cri.go:89] found id: ""
	I1212 01:39:45.594538  291455 logs.go:282] 0 containers: []
	W1212 01:39:45.594546  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:45.594553  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:45.594617  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:45.627865  291455 cri.go:89] found id: ""
	I1212 01:39:45.627903  291455 logs.go:282] 0 containers: []
	W1212 01:39:45.627913  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:45.627919  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:45.627987  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:45.655026  291455 cri.go:89] found id: ""
	I1212 01:39:45.655049  291455 logs.go:282] 0 containers: []
	W1212 01:39:45.655058  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:45.655064  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:45.655127  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:45.680560  291455 cri.go:89] found id: ""
	I1212 01:39:45.680635  291455 logs.go:282] 0 containers: []
	W1212 01:39:45.680650  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:45.680660  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:45.680672  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:45.744860  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:45.736290    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:45.736804    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:45.738503    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:45.739012    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:45.740483    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:45.736290    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:45.736804    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:45.738503    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:45.739012    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:45.740483    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:45.744886  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:45.744908  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:45.770100  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:45.770135  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:45.797429  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:45.797455  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:45.853262  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:45.853296  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
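The block above is one complete pass of the health probe driving this wait loop: minikube first looks for a running apiserver process (pgrep -xnf kube-apiserver.*minikube.*), then asks the CRI runtime for each expected component container by name (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard). Every crictl ps -a --quiet --name=... query returns an empty ID list, so no control-plane container has even been created yet. A minimal sketch of the per-component check follows; it assumes direct local access to crictl rather than the test harness's SSH runner, and listContainerIDs is a hypothetical helper name, not minikube's own:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors the probe in the log: ask crictl for all
    // containers (any state) whose name matches the given component.
    func listContainerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        // crictl --quiet prints one container ID per line.
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "kubernetes-dashboard"}
        for _, c := range components {
            ids, err := listContainerIDs(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", c)
                continue
            }
            fmt.Printf("%s: %v\n", c, ids)
        }
    }

An empty result for every component, as seen here, means the failure is earlier in the stack than any individual control-plane binary.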
	I1212 01:39:48.367022  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:48.378069  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:48.378145  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:48.403921  291455 cri.go:89] found id: ""
	I1212 01:39:48.403943  291455 logs.go:282] 0 containers: []
	W1212 01:39:48.403952  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:48.403958  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:48.404016  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:48.428988  291455 cri.go:89] found id: ""
	I1212 01:39:48.429012  291455 logs.go:282] 0 containers: []
	W1212 01:39:48.429020  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:48.429027  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:48.429084  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:48.453105  291455 cri.go:89] found id: ""
	I1212 01:39:48.453128  291455 logs.go:282] 0 containers: []
	W1212 01:39:48.453137  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:48.453143  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:48.453201  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:48.477514  291455 cri.go:89] found id: ""
	I1212 01:39:48.477536  291455 logs.go:282] 0 containers: []
	W1212 01:39:48.477546  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:48.477551  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:48.477612  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:48.506708  291455 cri.go:89] found id: ""
	I1212 01:39:48.506730  291455 logs.go:282] 0 containers: []
	W1212 01:39:48.506738  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:48.506743  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:48.506801  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:48.546136  291455 cri.go:89] found id: ""
	I1212 01:39:48.546158  291455 logs.go:282] 0 containers: []
	W1212 01:39:48.546166  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:48.546172  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:48.546230  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:48.584757  291455 cri.go:89] found id: ""
	I1212 01:39:48.584778  291455 logs.go:282] 0 containers: []
	W1212 01:39:48.584787  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:48.584792  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:48.584860  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:48.624953  291455 cri.go:89] found id: ""
	I1212 01:39:48.624973  291455 logs.go:282] 0 containers: []
	W1212 01:39:48.624981  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:48.624989  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:48.625000  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:48.682582  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:48.682616  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:48.696819  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:48.696847  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:48.761964  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:48.752888    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:48.753667    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:48.755472    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:48.756105    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:48.757916    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:48.752888    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:48.753667    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:48.755472    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:48.756105    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:48.757916    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
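Each "describe nodes" attempt fails identically: kubectl inside the node dials https://localhost:8443 and gets "connection refused", which means nothing is listening on the apiserver port at all, as opposed to a TLS handshake or authorization failure that would imply a running but unhealthy server. That distinction can be confirmed by probing the TCP port directly; the following is a minimal sketch with the host:port hard-coded to the values from this log, not a command the test itself runs:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the apiserver's secure port. "connection refused" here
        // reproduces the kubectl failure and rules out TLS/auth issues:
        // the socket is closed because no apiserver process is running.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }

Run on this node it would print the "not reachable" branch, matching the five memcache.go retries kubectl logs before giving up.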
	I1212 01:39:48.761982  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:48.761994  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:48.787735  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:48.787766  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:51.315518  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:51.325805  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:51.325878  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:51.348771  291455 cri.go:89] found id: ""
	I1212 01:39:51.348797  291455 logs.go:282] 0 containers: []
	W1212 01:39:51.348806  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:51.348812  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:51.348892  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:51.372310  291455 cri.go:89] found id: ""
	I1212 01:39:51.372384  291455 logs.go:282] 0 containers: []
	W1212 01:39:51.372399  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:51.372406  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:51.372463  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:51.410822  291455 cri.go:89] found id: ""
	I1212 01:39:51.410855  291455 logs.go:282] 0 containers: []
	W1212 01:39:51.410865  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:51.410871  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:51.410935  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:51.434671  291455 cri.go:89] found id: ""
	I1212 01:39:51.434702  291455 logs.go:282] 0 containers: []
	W1212 01:39:51.434710  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:51.434716  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:51.434783  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:51.459981  291455 cri.go:89] found id: ""
	I1212 01:39:51.460054  291455 logs.go:282] 0 containers: []
	W1212 01:39:51.460070  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:51.460077  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:51.460134  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:51.484764  291455 cri.go:89] found id: ""
	I1212 01:39:51.484788  291455 logs.go:282] 0 containers: []
	W1212 01:39:51.484802  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:51.484808  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:51.484864  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:51.508943  291455 cri.go:89] found id: ""
	I1212 01:39:51.508966  291455 logs.go:282] 0 containers: []
	W1212 01:39:51.508974  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:51.508981  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:51.509040  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:51.553470  291455 cri.go:89] found id: ""
	I1212 01:39:51.553497  291455 logs.go:282] 0 containers: []
	W1212 01:39:51.553505  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:51.553514  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:51.553525  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:51.653146  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:51.645161   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:51.645898   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:51.647455   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:51.647736   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:51.649182   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:51.645161   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:51.645898   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:51.647455   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:51.647736   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:51.649182   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:51.653168  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:51.653179  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:51.679418  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:51.679450  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:51.709581  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:51.709607  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:51.764844  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:51.764878  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
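The "container status" gathering command above is a small shell fallback chain: sudo `which crictl || echo crictl` ps -a resolves crictl's absolute path when possible (falling back to a bare crictl lookup under sudo's PATH), and if that whole invocation fails it falls back to sudo docker ps -a. The journalctl runs similarly tail the last 400 entries of each unit's journal. A sketch of the same first-success-wins pattern in Go, under the assumption of local command access; gather is my name for the helper, not one from the minikube source:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs each diagnostic command in order and returns the output
    // of the first one that succeeds, mimicking the
    // "crictl ps -a || docker ps -a" fallback chain in the log.
    func gather(cmds [][]string) (string, error) {
        var lastErr error
        for _, c := range cmds {
            out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
            if err == nil {
                return string(out), nil
            }
            lastErr = err
        }
        return "", lastErr
    }

    func main() {
        out, err := gather([][]string{
            {"sudo", "crictl", "ps", "-a"},
            {"sudo", "docker", "ps", "-a"},
        })
        if err != nil {
            fmt.Println("no container runtime answered:", err)
            return
        }
        fmt.Print(out)
    }

The fallback matters on containerd nodes like this one, where docker may be absent and crictl is the only runtime CLI available.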
	I1212 01:39:54.280411  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:54.290776  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:54.290856  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:54.315212  291455 cri.go:89] found id: ""
	I1212 01:39:54.315236  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.315246  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:54.315253  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:54.315311  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:54.339856  291455 cri.go:89] found id: ""
	I1212 01:39:54.339881  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.339890  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:54.339896  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:54.339958  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:54.368679  291455 cri.go:89] found id: ""
	I1212 01:39:54.368702  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.368711  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:54.368717  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:54.368776  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:54.393467  291455 cri.go:89] found id: ""
	I1212 01:39:54.393491  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.393500  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:54.393507  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:54.393566  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:54.418691  291455 cri.go:89] found id: ""
	I1212 01:39:54.418713  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.418722  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:54.418728  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:54.418785  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:54.444722  291455 cri.go:89] found id: ""
	I1212 01:39:54.444745  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.444759  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:54.444766  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:54.444824  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:54.470007  291455 cri.go:89] found id: ""
	I1212 01:39:54.470029  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.470037  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:54.470043  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:54.470104  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:54.494270  291455 cri.go:89] found id: ""
	I1212 01:39:54.494340  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.494354  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:54.494364  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:54.494403  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:54.599318  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:54.577598   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.579503   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.592839   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.593570   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.595243   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:54.577598   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.579503   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.592839   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.593570   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.595243   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:54.599389  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:54.599417  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:54.630152  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:54.630190  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:54.658141  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:54.658167  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:54.713516  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:54.713551  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:57.227361  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:57.237887  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:57.237955  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:57.262202  291455 cri.go:89] found id: ""
	I1212 01:39:57.262227  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.262236  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:57.262242  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:57.262299  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:57.287795  291455 cri.go:89] found id: ""
	I1212 01:39:57.287819  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.287828  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:57.287834  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:57.287900  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:57.312347  291455 cri.go:89] found id: ""
	I1212 01:39:57.312372  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.312381  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:57.312387  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:57.312448  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:57.340890  291455 cri.go:89] found id: ""
	I1212 01:39:57.340914  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.340924  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:57.340930  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:57.340994  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:57.364578  291455 cri.go:89] found id: ""
	I1212 01:39:57.364643  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.364658  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:57.364666  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:57.364735  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:57.389147  291455 cri.go:89] found id: ""
	I1212 01:39:57.389175  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.389184  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:57.389191  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:57.389248  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:57.415275  291455 cri.go:89] found id: ""
	I1212 01:39:57.415300  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.415315  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:57.415322  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:57.415385  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:57.440087  291455 cri.go:89] found id: ""
	I1212 01:39:57.440109  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.440118  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:57.440127  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:57.440138  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:57.467124  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:57.467150  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:57.522232  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:57.522269  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:57.538082  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:57.538160  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:57.643552  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:57.631917   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.632540   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.634329   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.634855   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.636638   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:57.631917   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.632540   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.634329   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.634855   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.636638   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:57.643574  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:57.643586  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
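The timestamps show the cadence of this loop: pgrep probes land at 01:39:48, :51, :54, :57, 01:40:00, and so on, roughly every three seconds, each followed by the same CRI queries and log gathering. A simplified stand-in for the loop producing these lines is sketched below; it is not minikube's actual implementation, and waitForAPIServer is a hypothetical name, but it shows the poll-until-deadline shape visible in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls for a kube-apiserver process on the node,
    // matching the cadence in the log: one probe roughly every three
    // seconds until the process appears or the deadline passes.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits nonzero when no process matches.
            err := exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Run()
            if err == nil {
                return nil
            }
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }

In the failing run the real loop keeps cycling like this until the test's start timeout expires, which is what drives the ~500-second durations in the failure table.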
	I1212 01:40:00.169313  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:00.228741  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:00.228823  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:00.303835  291455 cri.go:89] found id: ""
	I1212 01:40:00.305186  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.305353  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:00.309177  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:00.309371  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:00.363791  291455 cri.go:89] found id: ""
	I1212 01:40:00.363817  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.363826  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:00.363832  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:00.363904  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:00.428687  291455 cri.go:89] found id: ""
	I1212 01:40:00.428710  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.428720  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:00.428727  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:00.428821  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:00.471696  291455 cri.go:89] found id: ""
	I1212 01:40:00.471723  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.471732  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:00.471740  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:00.471820  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:00.509321  291455 cri.go:89] found id: ""
	I1212 01:40:00.509347  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.509372  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:00.509381  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:00.509460  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:00.593692  291455 cri.go:89] found id: ""
	I1212 01:40:00.593716  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.593725  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:00.593732  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:00.593800  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:00.662781  291455 cri.go:89] found id: ""
	I1212 01:40:00.662804  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.662813  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:00.662819  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:00.662912  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:00.689999  291455 cri.go:89] found id: ""
	I1212 01:40:00.690023  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.690031  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:00.690041  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:00.690053  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:00.747296  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:00.747331  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:00.761427  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:00.761454  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:00.828444  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:00.819830   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.820596   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.822241   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.822754   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.824365   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:00.819830   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.820596   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.822241   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.822754   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.824365   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:00.828466  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:00.828479  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:00.855218  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:00.855254  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:03.387867  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:03.398566  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:03.398659  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:03.427352  291455 cri.go:89] found id: ""
	I1212 01:40:03.427376  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.427385  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:03.427391  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:03.427456  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:03.451979  291455 cri.go:89] found id: ""
	I1212 01:40:03.452054  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.452069  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:03.452076  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:03.452150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:03.475705  291455 cri.go:89] found id: ""
	I1212 01:40:03.475729  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.475739  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:03.475744  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:03.475831  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:03.500258  291455 cri.go:89] found id: ""
	I1212 01:40:03.500283  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.500293  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:03.500300  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:03.500360  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:03.528939  291455 cri.go:89] found id: ""
	I1212 01:40:03.528962  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.528971  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:03.528976  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:03.529037  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:03.557541  291455 cri.go:89] found id: ""
	I1212 01:40:03.557566  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.557575  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:03.557581  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:03.557645  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:03.611801  291455 cri.go:89] found id: ""
	I1212 01:40:03.611827  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.611837  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:03.611843  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:03.611906  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:03.641008  291455 cri.go:89] found id: ""
	I1212 01:40:03.641034  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.641043  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:03.641053  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:03.641064  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:03.696830  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:03.696868  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:03.710227  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:03.710256  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:03.777119  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:03.769143   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.769540   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.771066   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.771655   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.773341   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:03.769143   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.769540   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.771066   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.771655   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.773341   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:03.777184  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:03.777203  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:03.802465  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:03.802497  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:06.331826  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:06.342482  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:06.342547  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:06.366505  291455 cri.go:89] found id: ""
	I1212 01:40:06.366527  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.366536  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:06.366542  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:06.366599  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:06.391672  291455 cri.go:89] found id: ""
	I1212 01:40:06.391696  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.391705  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:06.391711  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:06.391774  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:06.416914  291455 cri.go:89] found id: ""
	I1212 01:40:06.416941  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.416950  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:06.416956  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:06.417031  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:06.441562  291455 cri.go:89] found id: ""
	I1212 01:40:06.441584  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.441599  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:06.441606  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:06.441665  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:06.469918  291455 cri.go:89] found id: ""
	I1212 01:40:06.469942  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.469951  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:06.469957  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:06.470014  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:06.494455  291455 cri.go:89] found id: ""
	I1212 01:40:06.494478  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.494487  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:06.494503  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:06.494579  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:06.520013  291455 cri.go:89] found id: ""
	I1212 01:40:06.520037  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.520046  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:06.520052  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:06.520108  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:06.571478  291455 cri.go:89] found id: ""
	I1212 01:40:06.571509  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.571518  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:06.571528  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:06.571539  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:06.616555  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:06.616594  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:06.657561  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:06.657589  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:06.715328  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:06.715409  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:06.728591  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:06.728620  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:06.792104  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:06.783643   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.784436   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.785957   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.786254   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.787744   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:06.783643   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.784436   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.785957   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.786254   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.787744   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
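By this point the cycles have repeated several times with no change: no component container ever appears, so the fault is upstream of the control plane itself, in kubelet failing to start the static pods or containerd failing to create them. That makes the kubelet and containerd journals gathered in each cycle the most useful evidence in this log. When triaging by hand, narrowing the kubelet journal to error-priority lines is a quick first filter; the -p err flag below is my addition and not part of the command the probe runs:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Pull the last 400 kubelet journal entries, as the probe does,
        // but keep only error-priority lines. When no control-plane
        // container is ever created, kubelet's own errors are usually
        // the first actionable signal.
        out, err := exec.Command("sudo", "journalctl", "-u", "kubelet",
            "-n", "400", "-p", "err").CombinedOutput()
        if err != nil {
            fmt.Println("journalctl failed:", err)
            return
        }
        fmt.Print(string(out))
    }

The remaining cycle below is the last one captured before this excerpt is truncated mid-block.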
	I1212 01:40:09.292912  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:09.303462  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:09.303537  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:09.329031  291455 cri.go:89] found id: ""
	I1212 01:40:09.329057  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.329066  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:09.329072  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:09.329188  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:09.353474  291455 cri.go:89] found id: ""
	I1212 01:40:09.353498  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.353507  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:09.353513  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:09.353570  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:09.380805  291455 cri.go:89] found id: ""
	I1212 01:40:09.380830  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.380839  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:09.380845  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:09.380959  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:09.408831  291455 cri.go:89] found id: ""
	I1212 01:40:09.408854  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.408862  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:09.408868  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:09.408943  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:09.433352  291455 cri.go:89] found id: ""
	I1212 01:40:09.433374  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.433383  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:09.433389  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:09.433450  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:09.458129  291455 cri.go:89] found id: ""
	I1212 01:40:09.458149  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.458158  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:09.458165  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:09.458222  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:09.484528  291455 cri.go:89] found id: ""
	I1212 01:40:09.484552  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.484560  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:09.484567  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:09.484624  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:09.512777  291455 cri.go:89] found id: ""
	I1212 01:40:09.512802  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.512811  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:09.512820  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:09.512831  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:09.563517  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:09.563545  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:09.660558  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:09.660595  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:09.674516  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:09.674541  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:09.738215  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:09.730040   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.730861   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.732394   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.732881   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.734347   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:09.730040   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.730861   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.732394   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.732881   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.734347   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:09.738241  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:09.738253  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
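
Each iteration then asks the container runtime, component by component, whether a matching container exists in any state. A sketch of that query, assuming crictl is installed and pointed at the node's containerd socket; the component list and the crictl invocation are exactly the ones logged above, and an empty result corresponds to the repeated `found id: ""` / `0 containers: []` lines.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs runs the same query as cri.go in the log: list containers in
    // every state (-a) whose name matches, printing only their IDs (--quiet).
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
    	for _, c := range components {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Printf("%s: %v\n", c, err)
    			continue
    		}
    		// Empty even with -a: the containers were never created, not just stopped.
    		fmt.Printf("%-24s %d containers: %v\n", c, len(ids), ids)
    	}
    }
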
	I1212 01:40:12.263748  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:12.273959  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:12.274029  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:12.297055  291455 cri.go:89] found id: ""
	I1212 01:40:12.297087  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.297096  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:12.297118  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:12.297179  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:12.322284  291455 cri.go:89] found id: ""
	I1212 01:40:12.322308  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.322317  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:12.322323  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:12.322397  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:12.345905  291455 cri.go:89] found id: ""
	I1212 01:40:12.345929  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.345938  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:12.345944  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:12.346024  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:12.370571  291455 cri.go:89] found id: ""
	I1212 01:40:12.370593  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.370602  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:12.370608  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:12.370695  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:12.397426  291455 cri.go:89] found id: ""
	I1212 01:40:12.397473  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.397495  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:12.397514  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:12.397602  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:12.426531  291455 cri.go:89] found id: ""
	I1212 01:40:12.426556  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.426564  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:12.426571  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:12.426644  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:12.450837  291455 cri.go:89] found id: ""
	I1212 01:40:12.450864  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.450874  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:12.450882  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:12.450941  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:12.475392  291455 cri.go:89] found id: ""
	I1212 01:40:12.475415  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.475423  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:12.475433  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:12.475443  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:12.500596  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:12.500630  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:12.539878  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:12.539912  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:12.636980  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:12.637024  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:12.651233  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:12.651261  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:12.719321  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:12.710320   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.711168   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.712905   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.713556   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.715342   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:12.710320   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.711168   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.712905   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.713556   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.715342   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
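
When no component containers are found, the run falls back to gathering logs. The five producers and their exact commands recur in every iteration, but in a different order each time, which is consistent with ranging over a Go map (whose iteration order is deliberately randomized); that inference, and the wrapper below, are assumptions - only the command strings are taken from the log.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Command strings copied verbatim from the "Gathering logs for ..." lines.
    	producers := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"containerd":       "sudo journalctl -u containerd -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
    	}
    	for name, cmd := range producers { // map order is randomized, matching the log
    		fmt.Printf("Gathering logs for %s ...\n", name)
    		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
    			// In this run only "describe nodes" fails (status 1, see below).
    			fmt.Printf("failed %s: %v\n%s", name, err, out)
    		}
    	}
    }
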
	I1212 01:40:15.219607  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:15.230736  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:15.230837  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:15.255192  291455 cri.go:89] found id: ""
	I1212 01:40:15.255216  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.255225  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:15.255250  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:15.255312  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:15.280065  291455 cri.go:89] found id: ""
	I1212 01:40:15.280088  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.280097  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:15.280103  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:15.280182  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:15.305428  291455 cri.go:89] found id: ""
	I1212 01:40:15.305451  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.305460  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:15.305467  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:15.305533  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:15.329513  291455 cri.go:89] found id: ""
	I1212 01:40:15.329537  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.329545  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:15.329552  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:15.329612  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:15.353724  291455 cri.go:89] found id: ""
	I1212 01:40:15.353748  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.353757  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:15.353764  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:15.353821  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:15.379891  291455 cri.go:89] found id: ""
	I1212 01:40:15.379921  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.379930  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:15.379936  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:15.379994  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:15.410206  291455 cri.go:89] found id: ""
	I1212 01:40:15.410232  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.410242  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:15.410249  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:15.410308  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:15.436574  291455 cri.go:89] found id: ""
	I1212 01:40:15.436607  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.436616  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:15.436628  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:15.436640  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:15.496631  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:15.496672  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:15.511586  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:15.511614  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:15.643166  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:15.635198   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.635698   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.637279   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.637833   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.639441   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:15.635198   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.635698   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.637279   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.637833   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.639441   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:15.643192  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:15.643208  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:15.668006  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:15.668044  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
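
Each "failed describe nodes" warning prints the command twice and the stderr twice: once inside the wrapped error (after "stderr:") and once more in the explicit output field (between the "** stderr **" markers). A toy reconstruction of that formatting, under the assumption that the warning is composed from a command string, a wrapping error, and a separate output field - the variable names here are invented, not minikube's.

    package main

    import "fmt"

    func main() {
    	cmd := `/bin/bash -c "sudo .../kubectl describe nodes --kubeconfig=..."`
    	stderr := `The connection to the server localhost:8443 was refused - did you specify the right host or port?`
    	// The wrapped error already embeds the command and its stderr ...
    	err := fmt.Errorf("%s: Process exited with status 1\nstdout:\n\nstderr:\n%s", cmd, stderr)
    	// ... and the warning then appends the raw output a second time, yielding
    	// the duplicated blocks seen in every iteration above.
    	fmt.Printf("failed describe nodes: command: %s %v\n output: \n** stderr ** \n%s\n\n** /stderr **\n", cmd, err, stderr)
    }
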
	I1212 01:40:18.199232  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:18.210162  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:18.210237  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:18.235304  291455 cri.go:89] found id: ""
	I1212 01:40:18.235330  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.235339  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:18.235347  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:18.235412  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:18.261126  291455 cri.go:89] found id: ""
	I1212 01:40:18.261149  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.261157  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:18.261163  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:18.261225  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:18.285920  291455 cri.go:89] found id: ""
	I1212 01:40:18.285946  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.285954  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:18.285961  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:18.286056  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:18.310447  291455 cri.go:89] found id: ""
	I1212 01:40:18.310490  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.310500  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:18.310523  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:18.310601  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:18.334613  291455 cri.go:89] found id: ""
	I1212 01:40:18.334643  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.334653  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:18.334659  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:18.334725  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:18.363763  291455 cri.go:89] found id: ""
	I1212 01:40:18.363787  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.363797  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:18.363803  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:18.363864  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:18.389696  291455 cri.go:89] found id: ""
	I1212 01:40:18.389730  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.389739  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:18.389745  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:18.389812  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:18.416961  291455 cri.go:89] found id: ""
	I1212 01:40:18.417035  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.417059  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:18.417077  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:18.417104  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:18.474235  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:18.474268  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:18.487640  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:18.487666  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:18.567561  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:18.554594   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.555595   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.560540   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.560843   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.562408   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:18.554594   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.555595   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.560540   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.560843   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.562408   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:18.567584  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:18.567597  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:18.597523  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:18.597557  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
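
Every kubectl error in these iterations reduces to one observable fact: nothing accepts TCP connections on localhost:8443 inside the node (8443 being the apiserver port this kubeconfig points at). A self-contained check of just that condition - the address is taken from the log; everything else is a plain dial, not minikube code.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// Expected here: "connect: connection refused", the same error kubectl
    		// surfaces five times per attempt while discovering API groups.
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("apiserver port open")
    }
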
	I1212 01:40:21.132296  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:21.142685  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:21.142760  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:21.171993  291455 cri.go:89] found id: ""
	I1212 01:40:21.172020  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.172029  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:21.172035  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:21.172096  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:21.195907  291455 cri.go:89] found id: ""
	I1212 01:40:21.195929  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.195938  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:21.195944  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:21.196007  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:21.219496  291455 cri.go:89] found id: ""
	I1212 01:40:21.219524  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.219533  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:21.219540  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:21.219601  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:21.243807  291455 cri.go:89] found id: ""
	I1212 01:40:21.243834  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.243844  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:21.243850  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:21.243910  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:21.268956  291455 cri.go:89] found id: ""
	I1212 01:40:21.268977  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.268986  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:21.268993  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:21.269052  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:21.297557  291455 cri.go:89] found id: ""
	I1212 01:40:21.297580  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.297588  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:21.297595  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:21.297652  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:21.321755  291455 cri.go:89] found id: ""
	I1212 01:40:21.321776  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.321791  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:21.321798  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:21.321861  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:21.349054  291455 cri.go:89] found id: ""
	I1212 01:40:21.349076  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.349085  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:21.349094  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:21.349108  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:21.374597  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:21.374636  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:21.403444  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:21.403469  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:21.461656  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:21.461690  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:21.475293  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:21.475320  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:21.560836  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:21.545907   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.546745   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.548429   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.548732   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.550543   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:21.545907   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.546745   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.548429   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.548732   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.550543   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
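
The attempt timestamps step by roughly three seconds (01:40:09, :12, :15, :18, :21, :24 ...), so the whole probe-and-gather pass is being re-run on a short poll until some deadline. A sketch of that outer loop with assumed values: the 3 s interval is inferred from the timestamps, and the deadline is a placeholder - this excerpt does not show the real timeout.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func apiserverRunning() bool {
    	// Same probe as each iteration's first step.
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(4 * time.Minute) // placeholder, not from the log
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			fmt.Println("kube-apiserver is up")
    			return
    		}
    		time.Sleep(3 * time.Second) // interval inferred from the log timestamps
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }
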
	I1212 01:40:24.061094  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:24.071831  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:24.071913  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:24.097936  291455 cri.go:89] found id: ""
	I1212 01:40:24.097962  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.097971  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:24.097978  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:24.098036  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:24.127785  291455 cri.go:89] found id: ""
	I1212 01:40:24.127809  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.127819  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:24.127826  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:24.127889  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:24.153026  291455 cri.go:89] found id: ""
	I1212 01:40:24.153052  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.153063  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:24.153068  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:24.153127  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:24.176972  291455 cri.go:89] found id: ""
	I1212 01:40:24.176997  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.177006  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:24.177013  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:24.177073  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:24.213590  291455 cri.go:89] found id: ""
	I1212 01:40:24.213614  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.213623  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:24.213638  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:24.213696  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:24.241058  291455 cri.go:89] found id: ""
	I1212 01:40:24.241084  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.241092  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:24.241099  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:24.241158  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:24.265936  291455 cri.go:89] found id: ""
	I1212 01:40:24.265977  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.265985  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:24.265991  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:24.266050  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:24.289751  291455 cri.go:89] found id: ""
	I1212 01:40:24.289779  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.289788  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:24.289798  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:24.289809  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:24.316973  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:24.316999  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:24.372346  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:24.372380  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:24.385931  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:24.385960  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:24.453792  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:24.445261   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.445682   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.447332   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.447939   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.449784   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:24.445261   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.445682   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.447332   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.447939   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.449784   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:24.453813  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:24.453826  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:26.980134  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:26.991597  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:26.991671  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:27.019040  291455 cri.go:89] found id: ""
	I1212 01:40:27.019064  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.019073  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:27.019080  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:27.019154  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:27.046812  291455 cri.go:89] found id: ""
	I1212 01:40:27.046841  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.046854  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:27.046860  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:27.046968  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:27.071383  291455 cri.go:89] found id: ""
	I1212 01:40:27.071405  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.071414  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:27.071420  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:27.071490  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:27.095638  291455 cri.go:89] found id: ""
	I1212 01:40:27.095663  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.095672  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:27.095678  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:27.095755  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:27.119028  291455 cri.go:89] found id: ""
	I1212 01:40:27.119050  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.119059  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:27.119064  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:27.119123  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:27.143722  291455 cri.go:89] found id: ""
	I1212 01:40:27.143748  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.143757  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:27.143763  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:27.143839  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:27.167989  291455 cri.go:89] found id: ""
	I1212 01:40:27.168066  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.168088  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:27.168097  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:27.168168  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:27.193229  291455 cri.go:89] found id: ""
	I1212 01:40:27.193269  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.193279  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:27.193289  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:27.193304  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:27.248752  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:27.248788  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:27.262591  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:27.262627  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:27.329086  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:27.321229   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.321673   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.323243   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.323775   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.325351   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:27.321229   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.321673   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.323243   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.323775   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.325351   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:27.329111  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:27.329123  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:27.354405  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:27.354442  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:29.885003  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:29.896299  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:29.896378  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:29.922911  291455 cri.go:89] found id: ""
	I1212 01:40:29.922945  291455 logs.go:282] 0 containers: []
	W1212 01:40:29.922954  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:29.922961  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:29.923063  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:29.949238  291455 cri.go:89] found id: ""
	I1212 01:40:29.949264  291455 logs.go:282] 0 containers: []
	W1212 01:40:29.949273  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:29.949280  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:29.949338  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:29.974510  291455 cri.go:89] found id: ""
	I1212 01:40:29.974536  291455 logs.go:282] 0 containers: []
	W1212 01:40:29.974545  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:29.974551  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:29.974608  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:29.999116  291455 cri.go:89] found id: ""
	I1212 01:40:29.999142  291455 logs.go:282] 0 containers: []
	W1212 01:40:29.999151  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:29.999157  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:29.999223  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:30.078011  291455 cri.go:89] found id: ""
	I1212 01:40:30.078040  291455 logs.go:282] 0 containers: []
	W1212 01:40:30.078050  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:30.078058  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:30.078132  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:30.105966  291455 cri.go:89] found id: ""
	I1212 01:40:30.105993  291455 logs.go:282] 0 containers: []
	W1212 01:40:30.106003  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:30.106010  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:30.106078  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:30.134703  291455 cri.go:89] found id: ""
	I1212 01:40:30.134726  291455 logs.go:282] 0 containers: []
	W1212 01:40:30.134735  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:30.134780  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:30.134874  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:30.161984  291455 cri.go:89] found id: ""
	I1212 01:40:30.162009  291455 logs.go:282] 0 containers: []
	W1212 01:40:30.162018  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:30.162028  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:30.162039  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:30.193075  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:30.193103  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:30.252472  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:30.252508  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:30.266246  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:30.266276  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:30.333852  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:30.325323   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:30.325890   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:30.327426   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:30.327865   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:30.329291   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:30.333874  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:30.333886  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
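The cycle above (a pgrep for an apiserver process, then one crictl query per control-plane component) is the probe this wait loop repeats. As a minimal sketch, assuming crictl is available inside the node (e.g. via `minikube ssh`), and with component names and flags taken verbatim from the logged commands, the per-component check reduces to:

    # Hedged sketch of the probe sequence shown in the log above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done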
	I1212 01:40:32.860948  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:32.872085  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:32.872163  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:32.901386  291455 cri.go:89] found id: ""
	I1212 01:40:32.901410  291455 logs.go:282] 0 containers: []
	W1212 01:40:32.901425  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:32.901438  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:32.901499  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:32.926818  291455 cri.go:89] found id: ""
	I1212 01:40:32.926844  291455 logs.go:282] 0 containers: []
	W1212 01:40:32.926853  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:32.926859  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:32.926927  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:32.956149  291455 cri.go:89] found id: ""
	I1212 01:40:32.956187  291455 logs.go:282] 0 containers: []
	W1212 01:40:32.956196  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:32.956202  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:32.956259  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:32.988134  291455 cri.go:89] found id: ""
	I1212 01:40:32.988159  291455 logs.go:282] 0 containers: []
	W1212 01:40:32.988168  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:32.988174  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:32.988231  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:33.014432  291455 cri.go:89] found id: ""
	I1212 01:40:33.014459  291455 logs.go:282] 0 containers: []
	W1212 01:40:33.014468  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:33.014474  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:33.014534  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:33.039814  291455 cri.go:89] found id: ""
	I1212 01:40:33.039843  291455 logs.go:282] 0 containers: []
	W1212 01:40:33.039852  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:33.039859  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:33.039921  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:33.068378  291455 cri.go:89] found id: ""
	I1212 01:40:33.068401  291455 logs.go:282] 0 containers: []
	W1212 01:40:33.068410  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:33.068417  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:33.068475  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:33.097661  291455 cri.go:89] found id: ""
	I1212 01:40:33.097725  291455 logs.go:282] 0 containers: []
	W1212 01:40:33.097750  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:33.097775  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:33.097803  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:33.129775  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:33.129802  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:33.189298  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:33.189332  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:33.202981  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:33.203026  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:33.264626  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:33.256228   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:33.256801   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:33.258449   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:33.259112   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:33.260717   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:33.264648  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:33.264665  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
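Every "describe nodes" attempt in this section fails the same way: kubectl cannot reach https://localhost:8443, i.e. nothing is listening on the apiserver port. A hypothetical manual check of that endpoint (not part of the test harness; -k skips verification of the self-signed cert, and /healthz is the standard apiserver health endpoint):

    # Probe the endpoint the failing kubectl calls target (localhost:8443,
    # as shown in the connection-refused errors above).
    curl -sk --max-time 5 https://localhost:8443/healthz || echo "apiserver not reachable"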
	I1212 01:40:35.791109  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:35.807877  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:35.807951  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:35.851416  291455 cri.go:89] found id: ""
	I1212 01:40:35.851442  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.851450  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:35.851456  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:35.851518  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:35.888920  291455 cri.go:89] found id: ""
	I1212 01:40:35.888943  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.888952  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:35.888958  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:35.889018  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:35.915592  291455 cri.go:89] found id: ""
	I1212 01:40:35.915618  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.915628  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:35.915634  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:35.915715  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:35.939272  291455 cri.go:89] found id: ""
	I1212 01:40:35.939296  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.939305  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:35.939311  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:35.939370  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:35.968216  291455 cri.go:89] found id: ""
	I1212 01:40:35.968244  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.968253  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:35.968259  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:35.968317  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:35.993761  291455 cri.go:89] found id: ""
	I1212 01:40:35.993785  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.993796  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:35.993803  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:35.993863  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:36.022585  291455 cri.go:89] found id: ""
	I1212 01:40:36.022612  291455 logs.go:282] 0 containers: []
	W1212 01:40:36.022633  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:36.022640  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:36.022712  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:36.052933  291455 cri.go:89] found id: ""
	I1212 01:40:36.052955  291455 logs.go:282] 0 containers: []
	W1212 01:40:36.052965  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:36.052974  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:36.052991  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:36.122317  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:36.113883   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:36.114412   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:36.115894   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:36.116408   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:36.118260   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:36.122340  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:36.122353  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:36.146907  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:36.146940  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:36.174411  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:36.174444  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:36.229229  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:36.229259  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
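The "container status" step relies on a shell fallback so the same command line works whether crictl or only docker is installed. The command below is copied from the log; only the comments are added:

    # If crictl exists, `which crictl` prints its path and it is run directly.
    # Otherwise `echo crictl` keeps the command line well-formed, the sudo
    # invocation fails, and the `|| sudo docker ps -a` fallback runs instead.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a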
	I1212 01:40:38.742843  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:38.753061  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:38.753132  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:38.777998  291455 cri.go:89] found id: ""
	I1212 01:40:38.778024  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.778033  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:38.778039  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:38.778098  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:38.819601  291455 cri.go:89] found id: ""
	I1212 01:40:38.819630  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.819639  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:38.819649  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:38.819726  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:38.863492  291455 cri.go:89] found id: ""
	I1212 01:40:38.863555  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.863567  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:38.863574  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:38.863640  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:38.896081  291455 cri.go:89] found id: ""
	I1212 01:40:38.896109  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.896118  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:38.896124  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:38.896189  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:38.923782  291455 cri.go:89] found id: ""
	I1212 01:40:38.923824  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.923832  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:38.923838  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:38.923896  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:38.948257  291455 cri.go:89] found id: ""
	I1212 01:40:38.948289  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.948305  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:38.948312  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:38.948379  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:38.974066  291455 cri.go:89] found id: ""
	I1212 01:40:38.974090  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.974098  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:38.974104  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:38.974163  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:38.999566  291455 cri.go:89] found id: ""
	I1212 01:40:38.999654  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.999670  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:38.999681  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:38.999693  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:39.032809  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:39.032845  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:39.061204  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:39.061234  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:39.116485  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:39.116516  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:39.129984  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:39.130014  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:39.195391  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:39.187100   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.187857   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.189545   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.190069   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.191706   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:41.695676  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:41.707011  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:41.707085  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:41.731224  291455 cri.go:89] found id: ""
	I1212 01:40:41.731295  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.731318  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:41.731337  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:41.731422  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:41.759193  291455 cri.go:89] found id: ""
	I1212 01:40:41.759266  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.759289  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:41.759308  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:41.759394  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:41.793923  291455 cri.go:89] found id: ""
	I1212 01:40:41.793994  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.794017  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:41.794038  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:41.794121  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:41.844183  291455 cri.go:89] found id: ""
	I1212 01:40:41.844246  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.844277  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:41.844297  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:41.844405  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:41.880181  291455 cri.go:89] found id: ""
	I1212 01:40:41.880253  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.880288  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:41.880312  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:41.880412  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:41.908685  291455 cri.go:89] found id: ""
	I1212 01:40:41.908760  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.908776  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:41.908783  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:41.908840  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:41.933232  291455 cri.go:89] found id: ""
	I1212 01:40:41.933257  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.933265  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:41.933272  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:41.933361  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:41.957941  291455 cri.go:89] found id: ""
	I1212 01:40:41.957966  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.957975  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:41.957993  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:41.958004  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:42.012839  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:42.012878  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:42.028378  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:42.028410  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:42.099435  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:42.089806   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.091059   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.092313   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.093469   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.094522   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:42.099461  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:42.099477  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:42.127956  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:42.127997  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
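The "Gathering logs" steps map one-to-one onto host commands. A sketch that collects the same material into files for offline inspection (the file names are illustrative; the commands are the ones logged above):

    sudo journalctl -u kubelet -n 400    > kubelet.log     # kubelet unit logs
    sudo journalctl -u containerd -n 400 > containerd.log  # containerd unit logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo crictl ps -a                    > containers.log  # container status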
	I1212 01:40:44.666695  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:44.677340  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:44.677417  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:44.701562  291455 cri.go:89] found id: ""
	I1212 01:40:44.701585  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.701594  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:44.701600  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:44.701657  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:44.726430  291455 cri.go:89] found id: ""
	I1212 01:40:44.726452  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.726460  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:44.726466  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:44.726555  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:44.755275  291455 cri.go:89] found id: ""
	I1212 01:40:44.755298  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.755306  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:44.755312  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:44.755367  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:44.780079  291455 cri.go:89] found id: ""
	I1212 01:40:44.780105  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.780114  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:44.780120  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:44.780194  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:44.869405  291455 cri.go:89] found id: ""
	I1212 01:40:44.869429  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.869437  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:44.869444  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:44.869510  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:44.895160  291455 cri.go:89] found id: ""
	I1212 01:40:44.895186  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.895195  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:44.895201  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:44.895258  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:44.919698  291455 cri.go:89] found id: ""
	I1212 01:40:44.919721  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.919730  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:44.919736  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:44.919792  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:44.944054  291455 cri.go:89] found id: ""
	I1212 01:40:44.944076  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.944085  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:44.944093  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:44.944104  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:44.968670  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:44.968701  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:44.997722  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:44.997750  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:45.076118  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:45.076163  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:45.092613  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:45.092646  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:45.185594  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:45.175075   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.176253   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.177119   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.179849   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.180652   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:47.686812  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:47.697462  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:47.697534  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:47.725301  291455 cri.go:89] found id: ""
	I1212 01:40:47.725327  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.725336  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:47.725342  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:47.725406  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:47.750015  291455 cri.go:89] found id: ""
	I1212 01:40:47.750040  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.750050  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:47.750057  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:47.750116  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:47.774576  291455 cri.go:89] found id: ""
	I1212 01:40:47.774604  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.774613  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:47.774620  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:47.774679  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:47.823337  291455 cri.go:89] found id: ""
	I1212 01:40:47.823365  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.823374  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:47.823381  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:47.823451  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:47.863754  291455 cri.go:89] found id: ""
	I1212 01:40:47.863776  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.863785  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:47.863791  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:47.863851  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:47.892358  291455 cri.go:89] found id: ""
	I1212 01:40:47.892383  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.892391  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:47.892398  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:47.892463  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:47.916778  291455 cri.go:89] found id: ""
	I1212 01:40:47.916805  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.916815  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:47.916821  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:47.916900  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:47.942154  291455 cri.go:89] found id: ""
	I1212 01:40:47.942177  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.942185  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:47.942194  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:47.942208  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:47.955644  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:47.955725  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:48.027299  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:48.016837   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.017641   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.019747   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.020636   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.022832   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:48.027326  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:48.027340  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:48.052933  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:48.052970  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:48.089641  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:48.089674  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
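Each cycle opens with a pgrep probe: with -f the pattern is matched against the full command line, -x requires the whole line to match, and -n selects the newest matching process. Because no apiserver process ever appears, the crictl sweep follows every time. A standalone version of that check (a sketch, not the harness code):

    # The process check repeated at the top of every cycle above.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
        || echo "kube-apiserver process not found; falling back to container checks"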
	I1212 01:40:50.649196  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:50.660069  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:50.660143  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:50.685271  291455 cri.go:89] found id: ""
	I1212 01:40:50.685299  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.685309  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:50.685316  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:50.685378  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:50.712999  291455 cri.go:89] found id: ""
	I1212 01:40:50.713025  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.713034  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:50.713040  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:50.713099  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:50.737720  291455 cri.go:89] found id: ""
	I1212 01:40:50.737745  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.737754  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:50.737761  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:50.737828  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:50.763261  291455 cri.go:89] found id: ""
	I1212 01:40:50.763286  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.763294  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:50.763300  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:50.763358  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:50.811665  291455 cri.go:89] found id: ""
	I1212 01:40:50.811692  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.811701  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:50.811707  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:50.811768  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:50.870884  291455 cri.go:89] found id: ""
	I1212 01:40:50.870909  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.870921  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:50.870927  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:50.870986  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:50.896362  291455 cri.go:89] found id: ""
	I1212 01:40:50.896387  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.896395  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:50.896401  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:50.896457  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:50.924933  291455 cri.go:89] found id: ""
	I1212 01:40:50.924956  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.924964  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:50.924974  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:50.924986  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:50.982505  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:50.982537  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:50.996444  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:50.996467  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:51.075810  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:51.067309   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.068132   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.069797   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.070277   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.071678   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:51.075896  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:51.075929  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:51.100541  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:51.100577  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
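Taken together, this section is a single retry loop: probe for the apiserver, gather diagnostics, wait, repeat until the start timeout expires. A compressed sketch of that control flow (the roughly three-second cadence and the timeout value are inferred from the timestamps, not taken from the minikube source):

    # Hedged reconstruction of the observed retry cadence.
    deadline=$((SECONDS + 300))   # illustrative timeout, not the real value
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if [ "$SECONDS" -ge "$deadline" ]; then
            echo "apiserver wait timed out"
            break
        fi
        sleep 3
    done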
	I1212 01:40:53.629887  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:53.640204  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:53.640274  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:53.665408  291455 cri.go:89] found id: ""
	I1212 01:40:53.665487  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.665511  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:53.665531  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:53.665616  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:53.693593  291455 cri.go:89] found id: ""
	I1212 01:40:53.693620  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.693629  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:53.693635  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:53.693693  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:53.717209  291455 cri.go:89] found id: ""
	I1212 01:40:53.717234  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.717243  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:53.717249  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:53.717305  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:53.742008  291455 cri.go:89] found id: ""
	I1212 01:40:53.742033  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.742042  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:53.742049  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:53.742106  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:53.766463  291455 cri.go:89] found id: ""
	I1212 01:40:53.766489  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.766498  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:53.766505  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:53.766562  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:53.832090  291455 cri.go:89] found id: ""
	I1212 01:40:53.832118  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.832133  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:53.832140  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:53.832201  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:53.877395  291455 cri.go:89] found id: ""
	I1212 01:40:53.877422  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.877431  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:53.877438  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:53.877497  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:53.905857  291455 cri.go:89] found id: ""
	I1212 01:40:53.905883  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.905891  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:53.905900  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:53.905912  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:53.936211  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:53.936236  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:53.990768  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:53.990801  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:54.005707  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:54.005751  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:54.077323  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:54.068912   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.069627   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.071278   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.071804   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.073346   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:54.077345  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:54.077361  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
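	The cycle above (and each repetition below) can be reproduced by hand. A minimal sketch in bash, assuming shell access to the node (e.g. via `minikube ssh`) and that crictl is installed; the component names mirror the ones minikube queries in this log:

	  # Check every control-plane component the same way the log does;
	  # an empty result corresponds to the 'found id: ""' lines above.
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet kubernetes-dashboard; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    [ -z "$ids" ] && echo "no container matching \"$name\"" || echo "$name: $ids"
	  done

	With no containers created yet, every query takes the "no container" branch, matching the eight warnings per cycle.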
	I1212 01:40:56.603783  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:56.614362  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:56.614437  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:56.639205  291455 cri.go:89] found id: ""
	I1212 01:40:56.639230  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.639239  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:56.639245  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:56.639302  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:56.664961  291455 cri.go:89] found id: ""
	I1212 01:40:56.664983  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.664991  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:56.664997  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:56.665055  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:56.689125  291455 cri.go:89] found id: ""
	I1212 01:40:56.689148  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.689163  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:56.689169  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:56.689228  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:56.713944  291455 cri.go:89] found id: ""
	I1212 01:40:56.713969  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.713977  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:56.713984  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:56.714045  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:56.742503  291455 cri.go:89] found id: ""
	I1212 01:40:56.742536  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.742546  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:56.742552  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:56.742610  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:56.768074  291455 cri.go:89] found id: ""
	I1212 01:40:56.768101  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.768110  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:56.768116  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:56.768176  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:56.822219  291455 cri.go:89] found id: ""
	I1212 01:40:56.822241  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.822250  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:56.822256  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:56.822326  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:56.877551  291455 cri.go:89] found id: ""
	I1212 01:40:56.877579  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.877588  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:56.877598  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:56.877609  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:56.951400  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:56.942725   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.943403   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.945223   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.945864   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.947463   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:56.951423  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:56.951435  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:56.976432  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:56.976471  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:57.016067  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:57.016095  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:57.076530  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:57.076562  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
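	The kubectl failures above all reduce to a refused TCP connection on localhost:8443, which can be confirmed without kubectl. A minimal sketch, assuming curl is present on the node (/healthz is the standard kube-apiserver health endpoint):

	  # -k skips TLS verification, --max-time bounds the attempt.
	  # While no kube-apiserver container exists, this fails with
	  # "Connection refused", matching the memcache.go errors above.
	  curl -k --max-time 5 https://localhost:8443/healthz || echo "apiserver unreachable on :8443"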
	I1212 01:40:59.590650  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:59.601442  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:59.601513  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:59.627392  291455 cri.go:89] found id: ""
	I1212 01:40:59.627418  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.627426  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:59.627433  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:59.627492  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:59.652525  291455 cri.go:89] found id: ""
	I1212 01:40:59.652546  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.652555  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:59.652560  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:59.652620  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:59.677515  291455 cri.go:89] found id: ""
	I1212 01:40:59.677538  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.677546  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:59.677551  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:59.677609  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:59.701508  291455 cri.go:89] found id: ""
	I1212 01:40:59.701531  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.701539  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:59.701545  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:59.701602  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:59.726132  291455 cri.go:89] found id: ""
	I1212 01:40:59.726154  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.726162  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:59.726168  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:59.726228  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:59.751581  291455 cri.go:89] found id: ""
	I1212 01:40:59.751608  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.751617  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:59.751625  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:59.751682  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:59.780780  291455 cri.go:89] found id: ""
	I1212 01:40:59.780805  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.780825  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:59.780836  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:59.780901  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:59.866401  291455 cri.go:89] found id: ""
	I1212 01:40:59.866424  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.866433  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:59.866442  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:59.866453  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:59.921825  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:59.921862  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:59.935338  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:59.935366  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:59.999474  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:59.992159   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.992558   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.993995   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.994293   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.995686   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:59.999546  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:59.999574  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:00.079868  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:00.084769  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:02.719157  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:02.730262  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:02.730335  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:02.756172  291455 cri.go:89] found id: ""
	I1212 01:41:02.756196  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.756206  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:02.756213  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:02.756272  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:02.792420  291455 cri.go:89] found id: ""
	I1212 01:41:02.792445  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.792455  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:02.792461  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:02.792531  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:02.838813  291455 cri.go:89] found id: ""
	I1212 01:41:02.838841  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.838849  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:02.838856  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:02.838918  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:02.886478  291455 cri.go:89] found id: ""
	I1212 01:41:02.886504  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.886513  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:02.886523  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:02.886580  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:02.914286  291455 cri.go:89] found id: ""
	I1212 01:41:02.914309  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.914318  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:02.914333  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:02.914403  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:02.939527  291455 cri.go:89] found id: ""
	I1212 01:41:02.939550  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.939559  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:02.939565  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:02.939624  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:02.965321  291455 cri.go:89] found id: ""
	I1212 01:41:02.965345  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.965354  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:02.965360  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:02.965423  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:02.991292  291455 cri.go:89] found id: ""
	I1212 01:41:02.991316  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.991325  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:02.991341  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:02.991352  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:03.019527  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:03.019562  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:03.051852  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:03.051878  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:03.107633  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:03.107667  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:03.121349  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:03.121375  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:03.186261  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:03.177889   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.178763   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.180270   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.180822   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.182351   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:41:05.687947  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:05.698808  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:05.698883  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:05.724019  291455 cri.go:89] found id: ""
	I1212 01:41:05.724043  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.724052  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:05.724058  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:05.724115  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:05.752813  291455 cri.go:89] found id: ""
	I1212 01:41:05.752838  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.752847  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:05.752853  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:05.752917  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:05.777122  291455 cri.go:89] found id: ""
	I1212 01:41:05.777144  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.777152  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:05.777158  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:05.777215  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:05.833235  291455 cri.go:89] found id: ""
	I1212 01:41:05.833260  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.833270  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:05.833276  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:05.833350  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:05.880483  291455 cri.go:89] found id: ""
	I1212 01:41:05.880506  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.880514  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:05.880520  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:05.880583  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:05.904810  291455 cri.go:89] found id: ""
	I1212 01:41:05.904834  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.904843  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:05.904849  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:05.904906  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:05.936458  291455 cri.go:89] found id: ""
	I1212 01:41:05.936482  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.936491  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:05.936497  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:05.936585  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:05.965168  291455 cri.go:89] found id: ""
	I1212 01:41:05.965193  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.965202  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:05.965212  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:05.965225  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:06.022621  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:06.022674  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:06.036897  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:06.036926  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:06.105481  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:06.097089   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.097938   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.099584   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.099907   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.101467   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:41:06.105505  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:06.105518  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:06.131153  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:06.131186  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:08.659864  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:08.670811  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:08.670881  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:08.694882  291455 cri.go:89] found id: ""
	I1212 01:41:08.694903  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.694911  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:08.694917  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:08.694976  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:08.719560  291455 cri.go:89] found id: ""
	I1212 01:41:08.719590  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.719598  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:08.719605  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:08.719662  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:08.744076  291455 cri.go:89] found id: ""
	I1212 01:41:08.744103  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.744113  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:08.744119  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:08.744177  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:08.772960  291455 cri.go:89] found id: ""
	I1212 01:41:08.772985  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.772994  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:08.773001  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:08.773080  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:08.815633  291455 cri.go:89] found id: ""
	I1212 01:41:08.815659  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.815668  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:08.815674  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:08.815742  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:08.878320  291455 cri.go:89] found id: ""
	I1212 01:41:08.878345  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.878353  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:08.878360  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:08.878450  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:08.904601  291455 cri.go:89] found id: ""
	I1212 01:41:08.904628  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.904636  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:08.904643  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:08.904702  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:08.929638  291455 cri.go:89] found id: ""
	I1212 01:41:08.929660  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.929668  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:08.929678  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:08.929689  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:08.987700  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:08.987732  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:09.006748  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:09.006844  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:09.074571  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:09.066680   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.067299   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.068802   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.069203   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.070675   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:41:09.074595  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:09.074607  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:09.099568  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:09.099599  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:11.629539  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:11.640012  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:11.640082  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:11.663460  291455 cri.go:89] found id: ""
	I1212 01:41:11.663485  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.663493  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:11.663500  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:11.663555  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:11.686956  291455 cri.go:89] found id: ""
	I1212 01:41:11.686978  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.686986  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:11.687088  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:11.687150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:11.712890  291455 cri.go:89] found id: ""
	I1212 01:41:11.712913  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.712922  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:11.712928  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:11.712984  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:11.736706  291455 cri.go:89] found id: ""
	I1212 01:41:11.736728  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.736736  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:11.736742  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:11.736800  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:11.759893  291455 cri.go:89] found id: ""
	I1212 01:41:11.759915  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.759923  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:11.759929  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:11.759986  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:11.794524  291455 cri.go:89] found id: ""
	I1212 01:41:11.794548  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.794556  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:11.794563  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:11.794617  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:11.837664  291455 cri.go:89] found id: ""
	I1212 01:41:11.837685  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.837693  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:11.837699  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:11.837758  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:11.876539  291455 cri.go:89] found id: ""
	I1212 01:41:11.876560  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.876568  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:11.876576  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:11.876588  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:11.891935  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:11.891958  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:11.953883  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:11.945499   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.946165   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.947829   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.948378   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.949885   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:41:11.953906  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:11.953919  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:11.978361  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:11.978394  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:12.008436  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:12.008467  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
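	The timestamps show the whole check repeating on roughly a three-second cadence. An equivalent wait loop, as a minimal sketch assuming the same pgrep pattern minikube runs and a fixed 3s interval:

	  # Block until a kube-apiserver process for this minikube profile appears.
	  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    sleep 3   # matches the ~3s gap between cycles in this log
	  done
	  echo "kube-apiserver process detected"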
	I1212 01:41:14.566794  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:14.577540  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:14.577620  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:14.603419  291455 cri.go:89] found id: ""
	I1212 01:41:14.603444  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.603453  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:14.603459  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:14.603523  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:14.627963  291455 cri.go:89] found id: ""
	I1212 01:41:14.627986  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.627994  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:14.628000  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:14.628064  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:14.651989  291455 cri.go:89] found id: ""
	I1212 01:41:14.652014  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.652024  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:14.652031  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:14.652089  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:14.680771  291455 cri.go:89] found id: ""
	I1212 01:41:14.680794  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.680802  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:14.680808  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:14.680865  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:14.705454  291455 cri.go:89] found id: ""
	I1212 01:41:14.705479  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.705488  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:14.705494  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:14.705553  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:14.734181  291455 cri.go:89] found id: ""
	I1212 01:41:14.734207  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.734216  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:14.734222  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:14.734279  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:14.758125  291455 cri.go:89] found id: ""
	I1212 01:41:14.758150  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.758159  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:14.758165  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:14.758224  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:14.796212  291455 cri.go:89] found id: ""
	I1212 01:41:14.796239  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.796248  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:14.796257  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:14.796268  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:14.875942  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:14.875982  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:14.893694  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:14.893723  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:14.958664  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:14.950439   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.951146   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.952867   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.953336   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.954860   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:41:14.958686  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:14.958698  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:14.983555  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:14.983592  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:17.522313  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:17.532817  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:17.532892  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:17.560757  291455 cri.go:89] found id: ""
	I1212 01:41:17.560779  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.560788  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:17.560795  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:17.560851  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:17.585702  291455 cri.go:89] found id: ""
	I1212 01:41:17.585725  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.585734  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:17.585740  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:17.585807  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:17.614888  291455 cri.go:89] found id: ""
	I1212 01:41:17.614912  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.614920  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:17.614926  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:17.614983  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:17.640684  291455 cri.go:89] found id: ""
	I1212 01:41:17.640706  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.640714  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:17.640721  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:17.640781  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:17.666504  291455 cri.go:89] found id: ""
	I1212 01:41:17.666529  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.666538  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:17.666545  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:17.666619  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:17.693636  291455 cri.go:89] found id: ""
	I1212 01:41:17.693661  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.693670  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:17.693677  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:17.693738  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:17.718203  291455 cri.go:89] found id: ""
	I1212 01:41:17.718270  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.718310  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:17.718337  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:17.718430  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:17.745520  291455 cri.go:89] found id: ""
	I1212 01:41:17.745544  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.745553  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:17.745562  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:17.745574  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:17.809137  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:17.809237  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:17.824842  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:17.824909  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:17.914329  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:17.905491   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.906027   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.907410   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.907912   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.909473   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:41:17.905491   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.906027   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.907410   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.907912   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.909473   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:41:17.914350  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:17.914365  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:17.939510  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:17.939546  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:20.466980  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:20.480747  291455 out.go:203] 
	W1212 01:41:20.483558  291455 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1212 01:41:20.483596  291455 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1212 01:41:20.483610  291455 out.go:285] * Related issues:
	W1212 01:41:20.483628  291455 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1212 01:41:20.483644  291455 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1212 01:41:20.486471  291455 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245221319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245292023Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245392938Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245465406Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245525534Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245588713Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245646150Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245704997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245771016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245854791Z" level=info msg="Connect containerd service"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.246200073Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.246847141Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.263118416Z" level=info msg="Start subscribing containerd event"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.263340210Z" level=info msg="Start recovering state"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.263271673Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.264204469Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.302901466Z" level=info msg="Start event monitor"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.302971940Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.302983534Z" level=info msg="Start streaming server"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.303030213Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.303039617Z" level=info msg="runtime interface starting up..."
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.303045705Z" level=info msg="starting plugins..."
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.303228803Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.303400291Z" level=info msg="containerd successfully booted in 0.083333s"
	Dec 12 01:35:17 newest-cni-256959 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:23.665586   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:23.666522   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:23.668364   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:23.668944   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:23.670571   13451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	[Dec12 00:40] hrtimer: interrupt took 11339963 ns
	
	
	==> kernel <==
	 01:41:23 up  2:23,  0 user,  load average: 0.45, 0.63, 1.25
	Linux newest-cni-256959 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 01:41:20 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:41:20 newest-cni-256959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 12 01:41:20 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:20 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:20 newest-cni-256959 kubelet[13330]: E1212 01:41:20.916841   13330 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:41:20 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:41:20 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:41:21 newest-cni-256959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 12 01:41:21 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:21 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:21 newest-cni-256959 kubelet[13335]: E1212 01:41:21.657200   13335 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:41:21 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:41:21 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:41:22 newest-cni-256959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 12 01:41:22 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:22 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:22 newest-cni-256959 kubelet[13355]: E1212 01:41:22.406158   13355 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:41:22 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:41:22 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:41:23 newest-cni-256959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 12 01:41:23 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:23 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:23 newest-cni-256959 kubelet[13360]: E1212 01:41:23.146722   13360 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:41:23 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:41:23 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
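
The kubelet entries above pin down why the apiserver never appeared: every systemd restart (counters 484-487) exits with "kubelet is configured to not run on a host using cgroup v1", so no static pod, including kube-apiserver, can ever start, and minikube's 6m wait ends in K8S_APISERVER_MISSING. The kernel section shows a 20.04-era 5.15 AWS kernel, which boots with cgroup v1 by default. To confirm the cgroup version on such a host (a minimal sketch, not part of the test harness; the annotated outputs are the expected values, not taken from this run):

	# tmpfs => legacy cgroup v1; cgroup2fs => unified cgroup v2
	stat -fc %T /sys/fs/cgroup/
	# Docker reports the same directly (prints 1 or 2)
	docker info --format '{{.CgroupVersion}}'
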
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-256959 -n newest-cni-256959
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-256959 -n newest-cni-256959: exit status 2 (421.586866ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-256959" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (374.31s)
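
For a post-mortem on a failure like this, the status template used above can be widened to all component fields to see what survived (a sketch reusing the binary and profile from this run; the extra template fields are minikube's standard status fields, not output captured here):

	out/minikube-linux-arm64 status -p newest-cni-256959 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'
	# expected in this failure mode: host Running, kubelet/apiserver Stopped
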

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.52s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous line repeated 88 more times]
E1212 01:40:50.110077    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous line repeated 6 more times]
E1212 01:40:57.042410    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
last message repeated 21 times
E1212 01:41:18.647923    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
last message repeated 53 times
E1212 01:42:13.178403    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
last message repeated 28 times
E1212 01:42:41.713708    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
last message repeated 42 times
E1212 01:43:24.697714    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
last message repeated 16 times
E1212 01:43:41.615792    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
last message repeated 9 times
E1212 01:43:52.052173    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 82 more times]
E1212 01:45:15.119283    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 34 more times]
E1212 01:45:50.110571    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 6 more times]
E1212 01:45:57.042513    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 21 more times]
E1212 01:46:18.648232    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1212 01:47:56.565181    4290 config.go:182] Loaded profile config "custom-flannel-341847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
E1212 01:47:58.176250    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/auto-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:47:58.182579    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/auto-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:47:58.193885    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/auto-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:47:58.215682    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/auto-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:47:58.257039    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/auto-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:47:58.338679    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/auto-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:47:58.500734    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/auto-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:47:58.822690    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/auto-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:47:59.464311    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/auto-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:48:00.745761    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/auto-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:48:03.308031    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/auto-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:48:08.429576    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/auto-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:48:18.671792    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/auto-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-361053 -n no-preload-361053
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-361053 -n no-preload-361053: exit status 2 (445.803721ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-361053" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
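The retry loop above is, in effect, polling the dashboard namespace by label selector until the deadline expires. A minimal way to rerun that check by hand, assuming minikube created its usual kubeconfig context named after the profile:

	kubectl --context no-preload-361053 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

With the apiserver stopped, this fails with the same "connection refused" seen in the warnings; it only becomes meaningful once the profile's apiserver reports Running again.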
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
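A rough shell equivalent of the proxy snapshot printed above, for reproducing the same check outside the harness (the variable list is taken from the log line; the "<empty>" placeholder is an assumption mirroring its output):

	for v in HTTP_PROXY HTTPS_PROXY NO_PROXY; do printf '%s="%s"\n' "$v" "${!v:-<empty>}"; done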
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-361053
helpers_test.go:244: (dbg) docker inspect no-preload-361053:

-- stdout --
	[
	    {
	        "Id": "68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd",
	        "Created": "2025-12-12T01:22:53.604240637Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 287337,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T01:33:10.69835803Z",
	            "FinishedAt": "2025-12-12T01:33:09.357122497Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/hostname",
	        "HostsPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/hosts",
	        "LogPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd-json.log",
	        "Name": "/no-preload-361053",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-361053:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-361053",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd",
	                "LowerDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-361053",
	                "Source": "/var/lib/docker/volumes/no-preload-361053/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-361053",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-361053",
	                "name.minikube.sigs.k8s.io": "no-preload-361053",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "61cc494dd067263f866e7781df4148bb8c831ce7801f7a97e8775eb48f40b482",
	            "SandboxKey": "/var/run/docker/netns/61cc494dd067",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-361053": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:bb:a3:34:c6:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee086efedb5c3900c251cd31f9316499408470e70a7d486e64d8b91c6bf60cd7",
	                    "EndpointID": "f480dff36972a9a192fc5dc57b92877bed5645512d8423e9e85ac35e1acb41cd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-361053",
	                        "68256fe8de3b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
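One detail worth pulling out of the inspect dump: the apiserver port 8443/tcp is published on 127.0.0.1:33101, so the container itself is up even though the apiserver inside it is refusing connections. A sketch of extracting that mapping directly with a Go template, using the container name from this run:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' no-preload-361053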
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-361053 -n no-preload-361053
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-361053 -n no-preload-361053: exit status 2 (430.525622ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-361053 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-361053 logs -n 25: (1.039367069s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                          ARGS                                          │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-341847 sudo cat /etc/resolv.conf                                     │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │ 12 Dec 25 01:48 UTC │
	│ ssh     │ -p custom-flannel-341847 sudo crictl pods                                              │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │ 12 Dec 25 01:48 UTC │
	│ ssh     │ -p custom-flannel-341847 sudo crictl ps --all                                          │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │ 12 Dec 25 01:48 UTC │
	│ ssh     │ -p custom-flannel-341847 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;   │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │ 12 Dec 25 01:48 UTC │
	│ ssh     │ -p custom-flannel-341847 sudo ip a s                                                   │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │ 12 Dec 25 01:48 UTC │
	│ ssh     │ -p custom-flannel-341847 sudo ip r s                                                   │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │ 12 Dec 25 01:48 UTC │
	│ ssh     │ -p custom-flannel-341847 sudo iptables-save                                            │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │ 12 Dec 25 01:48 UTC │
	│ ssh     │ -p custom-flannel-341847 sudo iptables -t nat -L -n -v                                 │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │ 12 Dec 25 01:48 UTC │
	│ ssh     │ -p custom-flannel-341847 sudo cat /run/flannel/subnet.env                              │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │ 12 Dec 25 01:48 UTC │
	│ ssh     │ -p custom-flannel-341847 sudo cat /etc/kube-flannel/cni-conf.json                      │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │                     │
	│ ssh     │ -p custom-flannel-341847 sudo systemctl status kubelet --all --full --no-pager         │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │ 12 Dec 25 01:48 UTC │
	│ ssh     │ -p custom-flannel-341847 sudo systemctl cat kubelet --no-pager                         │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │ 12 Dec 25 01:48 UTC │
	│ ssh     │ -p custom-flannel-341847 sudo journalctl -xeu kubelet --all --full --no-pager          │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │ 12 Dec 25 01:48 UTC │
	│ ssh     │ -p custom-flannel-341847 sudo cat /etc/kubernetes/kubelet.conf                         │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │ 12 Dec 25 01:48 UTC │
	│ ssh     │ -p custom-flannel-341847 sudo cat /var/lib/kubelet/config.yaml                         │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │ 12 Dec 25 01:48 UTC │
	│ ssh     │ -p custom-flannel-341847 sudo systemctl status docker --all --full --no-pager          │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │                     │
	│ ssh     │ -p custom-flannel-341847 sudo systemctl cat docker --no-pager                          │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │ 12 Dec 25 01:48 UTC │
	│ ssh     │ -p custom-flannel-341847 sudo cat /etc/docker/daemon.json                              │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │                     │
	│ ssh     │ -p custom-flannel-341847 sudo docker system info                                       │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │                     │
	│ ssh     │ -p custom-flannel-341847 sudo systemctl status cri-docker --all --full --no-pager      │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │                     │
	│ ssh     │ -p custom-flannel-341847 sudo systemctl cat cri-docker --no-pager                      │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │ 12 Dec 25 01:48 UTC │
	│ ssh     │ -p custom-flannel-341847 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │                     │
	│ ssh     │ -p custom-flannel-341847 sudo cat /usr/lib/systemd/system/cri-docker.service           │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │ 12 Dec 25 01:48 UTC │
	│ ssh     │ -p custom-flannel-341847 sudo cri-dockerd --version                                    │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │ 12 Dec 25 01:48 UTC │
	│ ssh     │ -p custom-flannel-341847 sudo systemctl status containerd --all --full --no-pager      │ custom-flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:48 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 01:47:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:47:01.063861  331524 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:47:01.064055  331524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:47:01.064085  331524 out.go:374] Setting ErrFile to fd 2...
	I1212 01:47:01.064106  331524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:47:01.064513  331524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:47:01.065597  331524 out.go:368] Setting JSON to false
	I1212 01:47:01.066590  331524 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8967,"bootTime":1765495054,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 01:47:01.066696  331524 start.go:143] virtualization:  
	I1212 01:47:01.070651  331524 out.go:179] * [custom-flannel-341847] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:47:01.075362  331524 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:47:01.075436  331524 notify.go:221] Checking for updates...
	I1212 01:47:01.081991  331524 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:47:01.085529  331524 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:47:01.088671  331524 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 01:47:01.091926  331524 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:47:01.095313  331524 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:47:01.098945  331524 config.go:182] Loaded profile config "no-preload-361053": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:47:01.099125  331524 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:47:01.131631  331524 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:47:01.131793  331524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:47:01.198524  331524 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:47:01.189128003 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:47:01.198634  331524 docker.go:319] overlay module found
	I1212 01:47:01.201985  331524 out.go:179] * Using the docker driver based on user configuration
	I1212 01:47:01.205167  331524 start.go:309] selected driver: docker
	I1212 01:47:01.205199  331524 start.go:927] validating driver "docker" against <nil>
	I1212 01:47:01.205213  331524 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:47:01.205926  331524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:47:01.261192  331524 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:47:01.25148402 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:47:01.261351  331524 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 01:47:01.261588  331524 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:47:01.264620  331524 out.go:179] * Using Docker driver with root privileges
	I1212 01:47:01.267502  331524 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1212 01:47:01.267538  331524 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1212 01:47:01.267635  331524 start.go:353] cluster config:
	{Name:custom-flannel-341847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-341847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:47:01.270937  331524 out.go:179] * Starting "custom-flannel-341847" primary control-plane node in "custom-flannel-341847" cluster
	I1212 01:47:01.273934  331524 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 01:47:01.277128  331524 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:47:01.279955  331524 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1212 01:47:01.280020  331524 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1212 01:47:01.280036  331524 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:47:01.280050  331524 cache.go:65] Caching tarball of preloaded images
	I1212 01:47:01.280132  331524 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 01:47:01.280142  331524 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1212 01:47:01.280249  331524 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/config.json ...
	I1212 01:47:01.280275  331524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/config.json: {Name:mkc488cb01c2f642c64330acd83f3b7624c4fe7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:47:01.308004  331524 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:47:01.308025  331524 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:47:01.308043  331524 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:47:01.308072  331524 start.go:360] acquireMachinesLock for custom-flannel-341847: {Name:mk23d1666e4afcdaa358f6080a8228a4e4e0d750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:47:01.308170  331524 start.go:364] duration metric: took 82.643µs to acquireMachinesLock for "custom-flannel-341847"
	I1212 01:47:01.308194  331524 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-341847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-341847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 01:47:01.308265  331524 start.go:125] createHost starting for "" (driver="docker")
	I1212 01:47:01.311750  331524 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 01:47:01.311968  331524 start.go:159] libmachine.API.Create for "custom-flannel-341847" (driver="docker")
	I1212 01:47:01.311998  331524 client.go:173] LocalClient.Create starting
	I1212 01:47:01.312071  331524 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem
	I1212 01:47:01.312107  331524 main.go:143] libmachine: Decoding PEM data...
	I1212 01:47:01.312123  331524 main.go:143] libmachine: Parsing certificate...
	I1212 01:47:01.312173  331524 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem
	I1212 01:47:01.312189  331524 main.go:143] libmachine: Decoding PEM data...
	I1212 01:47:01.312203  331524 main.go:143] libmachine: Parsing certificate...
	I1212 01:47:01.312564  331524 cli_runner.go:164] Run: docker network inspect custom-flannel-341847 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 01:47:01.335150  331524 cli_runner.go:211] docker network inspect custom-flannel-341847 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 01:47:01.335228  331524 network_create.go:284] running [docker network inspect custom-flannel-341847] to gather additional debugging logs...
	I1212 01:47:01.335244  331524 cli_runner.go:164] Run: docker network inspect custom-flannel-341847
	W1212 01:47:01.357147  331524 cli_runner.go:211] docker network inspect custom-flannel-341847 returned with exit code 1
	I1212 01:47:01.357180  331524 network_create.go:287] error running [docker network inspect custom-flannel-341847]: docker network inspect custom-flannel-341847: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-341847 not found
	I1212 01:47:01.357194  331524 network_create.go:289] output of [docker network inspect custom-flannel-341847]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-341847 not found
	
	** /stderr **
	I1212 01:47:01.357284  331524 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:47:01.374740  331524 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4cd687b06342 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a2:e8:c8:87:d3:0a} reservation:<nil>}
	I1212 01:47:01.375204  331524 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c02c16721c9d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:e7:06:63:2c:e9} reservation:<nil>}
	I1212 01:47:01.375568  331524 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-805b07ff58c0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:18:35:7a:03:02} reservation:<nil>}
	I1212 01:47:01.375984  331524 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b2d10}
	I1212 01:47:01.376002  331524 network_create.go:124] attempt to create docker network custom-flannel-341847 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1212 01:47:01.376068  331524 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-341847 custom-flannel-341847
	I1212 01:47:01.433982  331524 network_create.go:108] docker network custom-flannel-341847 192.168.76.0/24 created
	I1212 01:47:01.434027  331524 kic.go:121] calculated static IP "192.168.76.2" for the "custom-flannel-341847" container
	I1212 01:47:01.434118  331524 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 01:47:01.450584  331524 cli_runner.go:164] Run: docker volume create custom-flannel-341847 --label name.minikube.sigs.k8s.io=custom-flannel-341847 --label created_by.minikube.sigs.k8s.io=true
	I1212 01:47:01.469089  331524 oci.go:103] Successfully created a docker volume custom-flannel-341847
	I1212 01:47:01.469175  331524 cli_runner.go:164] Run: docker run --rm --name custom-flannel-341847-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-341847 --entrypoint /usr/bin/test -v custom-flannel-341847:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 01:47:02.013959  331524 oci.go:107] Successfully prepared a docker volume custom-flannel-341847
	I1212 01:47:02.014040  331524 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1212 01:47:02.014050  331524 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 01:47:02.014136  331524 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v custom-flannel-341847:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 01:47:06.952718  331524 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v custom-flannel-341847:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.938518741s)
	I1212 01:47:06.952754  331524 kic.go:203] duration metric: took 4.938697318s to extract preloaded images to volume ...
	W1212 01:47:06.952895  331524 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 01:47:06.953015  331524 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 01:47:07.008346  331524 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-341847 --name custom-flannel-341847 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-341847 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-341847 --network custom-flannel-341847 --ip 192.168.76.2 --volume custom-flannel-341847:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 01:47:07.322553  331524 cli_runner.go:164] Run: docker container inspect custom-flannel-341847 --format={{.State.Running}}
	I1212 01:47:07.341838  331524 cli_runner.go:164] Run: docker container inspect custom-flannel-341847 --format={{.State.Status}}
	I1212 01:47:07.382351  331524 cli_runner.go:164] Run: docker exec custom-flannel-341847 stat /var/lib/dpkg/alternatives/iptables
	I1212 01:47:07.440502  331524 oci.go:144] the created container "custom-flannel-341847" has a running status.
	I1212 01:47:07.440528  331524 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/custom-flannel-341847/id_rsa...
	I1212 01:47:07.717377  331524 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-2343/.minikube/machines/custom-flannel-341847/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 01:47:07.740619  331524 cli_runner.go:164] Run: docker container inspect custom-flannel-341847 --format={{.State.Status}}
	I1212 01:47:07.763540  331524 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 01:47:07.763577  331524 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-341847 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 01:47:07.813006  331524 cli_runner.go:164] Run: docker container inspect custom-flannel-341847 --format={{.State.Status}}
	I1212 01:47:07.839255  331524 machine.go:94] provisionDockerMachine start ...
	I1212 01:47:07.839341  331524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-341847
	I1212 01:47:07.865065  331524 main.go:143] libmachine: Using SSH client type: native
	I1212 01:47:07.865410  331524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1212 01:47:07.865420  331524 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:47:07.867046  331524 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55292->127.0.0.1:33123: read: connection reset by peer
	I1212 01:47:11.018936  331524 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-341847
	
	I1212 01:47:11.018963  331524 ubuntu.go:182] provisioning hostname "custom-flannel-341847"
	I1212 01:47:11.019046  331524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-341847
	I1212 01:47:11.038344  331524 main.go:143] libmachine: Using SSH client type: native
	I1212 01:47:11.038665  331524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1212 01:47:11.038682  331524 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-341847 && echo "custom-flannel-341847" | sudo tee /etc/hostname
	I1212 01:47:11.200782  331524 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-341847
	
	I1212 01:47:11.200931  331524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-341847
	I1212 01:47:11.218689  331524 main.go:143] libmachine: Using SSH client type: native
	I1212 01:47:11.219053  331524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1212 01:47:11.219076  331524 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-341847' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-341847/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-341847' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:47:11.370969  331524 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:47:11.371017  331524 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 01:47:11.371042  331524 ubuntu.go:190] setting up certificates
	I1212 01:47:11.371057  331524 provision.go:84] configureAuth start
	I1212 01:47:11.371122  331524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-341847
	I1212 01:47:11.388009  331524 provision.go:143] copyHostCerts
	I1212 01:47:11.388082  331524 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 01:47:11.388094  331524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 01:47:11.388181  331524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 01:47:11.388269  331524 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 01:47:11.388278  331524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 01:47:11.388310  331524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 01:47:11.388367  331524 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 01:47:11.388376  331524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 01:47:11.388406  331524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 01:47:11.388455  331524 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-341847 san=[127.0.0.1 192.168.76.2 custom-flannel-341847 localhost minikube]
	I1212 01:47:11.622002  331524 provision.go:177] copyRemoteCerts
	I1212 01:47:11.622066  331524 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:47:11.622110  331524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-341847
	I1212 01:47:11.638826  331524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/custom-flannel-341847/id_rsa Username:docker}
	I1212 01:47:11.746427  331524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:47:11.765739  331524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 01:47:11.784196  331524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 01:47:11.804763  331524 provision.go:87] duration metric: took 433.680027ms to configureAuth
	I1212 01:47:11.804787  331524 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:47:11.804978  331524 config.go:182] Loaded profile config "custom-flannel-341847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 01:47:11.804985  331524 machine.go:97] duration metric: took 3.965713789s to provisionDockerMachine
	I1212 01:47:11.804996  331524 client.go:176] duration metric: took 10.492988923s to LocalClient.Create
	I1212 01:47:11.805011  331524 start.go:167] duration metric: took 10.493044251s to libmachine.API.Create "custom-flannel-341847"
	I1212 01:47:11.805018  331524 start.go:293] postStartSetup for "custom-flannel-341847" (driver="docker")
	I1212 01:47:11.805027  331524 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:47:11.805075  331524 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:47:11.805118  331524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-341847
	I1212 01:47:11.823981  331524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/custom-flannel-341847/id_rsa Username:docker}
	I1212 01:47:11.930872  331524 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:47:11.933946  331524 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:47:11.933980  331524 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:47:11.933992  331524 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 01:47:11.934046  331524 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 01:47:11.934140  331524 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 01:47:11.934250  331524 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:47:11.941717  331524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:47:11.959133  331524 start.go:296] duration metric: took 154.100328ms for postStartSetup
	I1212 01:47:11.959558  331524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-341847
	I1212 01:47:11.976710  331524 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/config.json ...
	I1212 01:47:11.976985  331524 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:47:11.977047  331524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-341847
	I1212 01:47:11.993692  331524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/custom-flannel-341847/id_rsa Username:docker}
	I1212 01:47:12.101741  331524 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:47:12.106345  331524 start.go:128] duration metric: took 10.798066508s to createHost
	I1212 01:47:12.106370  331524 start.go:83] releasing machines lock for "custom-flannel-341847", held for 10.798192064s
	I1212 01:47:12.106443  331524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-341847
	I1212 01:47:12.123302  331524 ssh_runner.go:195] Run: cat /version.json
	I1212 01:47:12.123338  331524 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:47:12.123354  331524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-341847
	I1212 01:47:12.123400  331524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-341847
	I1212 01:47:12.142506  331524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/custom-flannel-341847/id_rsa Username:docker}
	I1212 01:47:12.147397  331524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/custom-flannel-341847/id_rsa Username:docker}
	I1212 01:47:12.242775  331524 ssh_runner.go:195] Run: systemctl --version
	I1212 01:47:12.356509  331524 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:47:12.361192  331524 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:47:12.361297  331524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:47:12.390782  331524 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1212 01:47:12.390823  331524 start.go:496] detecting cgroup driver to use...
	I1212 01:47:12.390857  331524 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:47:12.390912  331524 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 01:47:12.406748  331524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 01:47:12.420569  331524 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:47:12.420695  331524 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:47:12.438129  331524 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:47:12.456458  331524 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:47:12.593604  331524 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:47:12.721738  331524 docker.go:234] disabling docker service ...
	I1212 01:47:12.721805  331524 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:47:12.745576  331524 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:47:12.758556  331524 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:47:12.879497  331524 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:47:13.003112  331524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:47:13.018514  331524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:47:13.034805  331524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 01:47:13.045410  331524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 01:47:13.055122  331524 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 01:47:13.055192  331524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 01:47:13.065074  331524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:47:13.075262  331524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 01:47:13.085191  331524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:47:13.094833  331524 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:47:13.103701  331524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 01:47:13.113119  331524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 01:47:13.122420  331524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 01:47:13.131510  331524 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:47:13.139201  331524 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:47:13.146572  331524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:47:13.261808  331524 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 01:47:13.402283  331524 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 01:47:13.402379  331524 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 01:47:13.406165  331524 start.go:564] Will wait 60s for crictl version
	I1212 01:47:13.406236  331524 ssh_runner.go:195] Run: which crictl
	I1212 01:47:13.409682  331524 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:47:13.435865  331524 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 01:47:13.435938  331524 ssh_runner.go:195] Run: containerd --version
	I1212 01:47:13.457815  331524 ssh_runner.go:195] Run: containerd --version
	I1212 01:47:13.485067  331524 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1212 01:47:13.488185  331524 cli_runner.go:164] Run: docker network inspect custom-flannel-341847 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:47:13.504203  331524 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 01:47:13.507947  331524 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:47:13.517351  331524 kubeadm.go:884] updating cluster {Name:custom-flannel-341847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-341847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:47:13.517456  331524 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1212 01:47:13.517527  331524 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:47:13.541654  331524 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:47:13.541678  331524 containerd.go:534] Images already preloaded, skipping extraction
	I1212 01:47:13.541741  331524 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:47:13.565534  331524 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:47:13.565555  331524 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:47:13.565563  331524 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 containerd true true} ...
	I1212 01:47:13.565659  331524 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-341847 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-341847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I1212 01:47:13.565728  331524 ssh_runner.go:195] Run: sudo crictl info
	I1212 01:47:13.589853  331524 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1212 01:47:13.589947  331524 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 01:47:13.590001  331524 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-341847 NodeName:custom-flannel-341847 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:47:13.590145  331524 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "custom-flannel-341847"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:47:13.590238  331524 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 01:47:13.598825  331524 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:47:13.598931  331524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:47:13.606441  331524 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1212 01:47:13.618819  331524 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:47:13.634833  331524 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
	I1212 01:47:13.647699  331524 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:47:13.651118  331524 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
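The one-liner above keeps /etc/hosts idempotent: grep -v drops any stale control-plane.minikube.internal entry, echo appends the current mapping, and sudo cp installs the rewritten file. The same commands, reformatted with comments for readability:

	# Drop any existing entry for the control-plane alias, then append the
	# current IP mapping; write to a temp file first so /etc/hosts is only
	# replaced once the new content is complete.
	{
	  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  echo "192.168.76.2	control-plane.minikube.internal"
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts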
	I1212 01:47:13.660386  331524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:47:13.783807  331524 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:47:13.799418  331524 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847 for IP: 192.168.76.2
	I1212 01:47:13.799480  331524 certs.go:195] generating shared ca certs ...
	I1212 01:47:13.799510  331524 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:47:13.799678  331524 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 01:47:13.799746  331524 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 01:47:13.799768  331524 certs.go:257] generating profile certs ...
	I1212 01:47:13.799849  331524 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/client.key
	I1212 01:47:13.799882  331524 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/client.crt with IP's: []
	I1212 01:47:13.986054  331524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/client.crt ...
	I1212 01:47:13.986087  331524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/client.crt: {Name:mkc9b206deefd775671f7a7afadab1eb73e3df3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:47:13.986314  331524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/client.key ...
	I1212 01:47:13.986331  331524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/client.key: {Name:mkf57c41755c7d80fc8c5c4d24575c46e352434b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:47:13.986434  331524 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/apiserver.key.7626b29a
	I1212 01:47:13.986455  331524 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/apiserver.crt.7626b29a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1212 01:47:14.218317  331524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/apiserver.crt.7626b29a ...
	I1212 01:47:14.218353  331524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/apiserver.crt.7626b29a: {Name:mk08b2b780eb1e41dac2749eb88b8c32adcd9162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:47:14.218537  331524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/apiserver.key.7626b29a ...
	I1212 01:47:14.218551  331524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/apiserver.key.7626b29a: {Name:mke0499afb1f8b3909a2f7ccbf4f8587c1090de4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:47:14.218636  331524 certs.go:382] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/apiserver.crt.7626b29a -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/apiserver.crt
	I1212 01:47:14.218726  331524 certs.go:386] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/apiserver.key.7626b29a -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/apiserver.key
	I1212 01:47:14.218787  331524 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/proxy-client.key
	I1212 01:47:14.218806  331524 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/proxy-client.crt with IP's: []
	I1212 01:47:14.318828  331524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/proxy-client.crt ...
	I1212 01:47:14.318856  331524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/proxy-client.crt: {Name:mk7442f3c7804c10e91b068830cab0971f2a9e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:47:14.319031  331524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/proxy-client.key ...
	I1212 01:47:14.319047  331524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/proxy-client.key: {Name:mk51ca72d26e5da6fdab20f7b0308500b18888a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:47:14.319226  331524 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 01:47:14.319277  331524 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 01:47:14.319292  331524 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:47:14.319321  331524 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:47:14.319353  331524 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:47:14.319382  331524 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 01:47:14.319439  331524 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:47:14.320015  331524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:47:14.337930  331524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:47:14.357914  331524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:47:14.376679  331524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:47:14.393914  331524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1212 01:47:14.412506  331524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:47:14.430769  331524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:47:14.449595  331524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:47:14.467262  331524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:47:14.485284  331524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 01:47:14.502379  331524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 01:47:14.519941  331524 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:47:14.536354  331524 ssh_runner.go:195] Run: openssl version
	I1212 01:47:14.545999  331524 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:47:14.553596  331524 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:47:14.561903  331524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:47:14.566689  331524 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:47:14.566755  331524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:47:14.613111  331524 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:47:14.620599  331524 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 01:47:14.627830  331524 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 01:47:14.635455  331524 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 01:47:14.643096  331524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 01:47:14.646875  331524 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 01:47:14.646948  331524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 01:47:14.687853  331524 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:47:14.695158  331524 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4290.pem /etc/ssl/certs/51391683.0
	I1212 01:47:14.702485  331524 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 01:47:14.709895  331524 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 01:47:14.717445  331524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 01:47:14.721194  331524 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 01:47:14.721298  331524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 01:47:14.761773  331524 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 01:47:14.769578  331524 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42902.pem /etc/ssl/certs/3ec20f2e.0
	I1212 01:47:14.776950  331524 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:47:14.780765  331524 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 01:47:14.780821  331524 kubeadm.go:401] StartCluster: {Name:custom-flannel-341847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-341847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:47:14.780899  331524 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 01:47:14.780968  331524 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:47:14.827476  331524 cri.go:89] found id: ""
	I1212 01:47:14.827549  331524 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:47:14.837588  331524 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:47:14.846393  331524 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:47:14.846458  331524 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:47:14.854479  331524 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:47:14.854509  331524 kubeadm.go:158] found existing configuration files:
	
	I1212 01:47:14.854582  331524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:47:14.862416  331524 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:47:14.862514  331524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:47:14.869745  331524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:47:14.877354  331524 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:47:14.877423  331524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:47:14.884881  331524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:47:14.892524  331524 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:47:14.892619  331524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:47:14.900074  331524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:47:14.907847  331524 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:47:14.907917  331524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:47:14.915137  331524 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:47:14.954508  331524 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 01:47:14.954566  331524 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:47:14.978436  331524 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:47:14.978695  331524 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:47:14.978738  331524 kubeadm.go:319] OS: Linux
	I1212 01:47:14.978785  331524 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:47:14.978834  331524 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:47:14.978882  331524 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:47:14.978932  331524 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:47:14.978981  331524 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:47:14.979070  331524 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:47:14.979133  331524 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:47:14.979180  331524 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:47:14.979227  331524 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:47:15.050561  331524 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:47:15.050767  331524 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:47:15.050899  331524 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:47:15.057244  331524 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:47:15.063747  331524 out.go:252]   - Generating certificates and keys ...
	I1212 01:47:15.063848  331524 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:47:15.063920  331524 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:47:15.235308  331524 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 01:47:15.883142  331524 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 01:47:16.345346  331524 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 01:47:16.589519  331524 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 01:47:17.997075  331524 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 01:47:17.997389  331524 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-341847 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 01:47:18.734982  331524 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 01:47:18.735149  331524 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-341847 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 01:47:19.610247  331524 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 01:47:19.732770  331524 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 01:47:20.237007  331524 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 01:47:20.237255  331524 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:47:20.377064  331524 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:47:21.107864  331524 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:47:21.357437  331524 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:47:22.184517  331524 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:47:22.340880  331524 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:47:22.340973  331524 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:47:22.343799  331524 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:47:22.347616  331524 out.go:252]   - Booting up control plane ...
	I1212 01:47:22.347738  331524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:47:22.347835  331524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:47:22.347916  331524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:47:22.363343  331524 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:47:22.363457  331524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:47:22.371370  331524 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:47:22.371756  331524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:47:22.371973  331524 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:47:22.521457  331524 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:47:22.521584  331524 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:47:23.523427  331524 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002085461s
	I1212 01:47:23.528397  331524 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 01:47:23.529205  331524 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1212 01:47:23.529302  331524 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 01:47:23.529403  331524 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 01:47:26.804250  331524 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.274331806s
	I1212 01:47:28.259536  331524 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.729592765s
	I1212 01:47:30.536378  331524 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.004591264s
	I1212 01:47:30.574527  331524 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:47:30.594803  331524 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:47:30.609514  331524 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:47:30.609720  331524 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-341847 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:47:30.620983  331524 kubeadm.go:319] [bootstrap-token] Using token: cl3045.14tfdhpvamclapzj
	I1212 01:47:30.623932  331524 out.go:252]   - Configuring RBAC rules ...
	I1212 01:47:30.624067  331524 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:47:30.630733  331524 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:47:30.640033  331524 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:47:30.644223  331524 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:47:30.648456  331524 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:47:30.652664  331524 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:47:30.940863  331524 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:47:31.392619  331524 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 01:47:31.943789  331524 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 01:47:31.944983  331524 kubeadm.go:319] 
	I1212 01:47:31.945065  331524 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 01:47:31.945077  331524 kubeadm.go:319] 
	I1212 01:47:31.945149  331524 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 01:47:31.945159  331524 kubeadm.go:319] 
	I1212 01:47:31.945182  331524 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 01:47:31.945246  331524 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:47:31.945298  331524 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:47:31.945310  331524 kubeadm.go:319] 
	I1212 01:47:31.945362  331524 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 01:47:31.945370  331524 kubeadm.go:319] 
	I1212 01:47:31.945414  331524 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:47:31.945422  331524 kubeadm.go:319] 
	I1212 01:47:31.945471  331524 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 01:47:31.945546  331524 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:47:31.945619  331524 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:47:31.945626  331524 kubeadm.go:319] 
	I1212 01:47:31.945705  331524 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:47:31.945782  331524 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 01:47:31.945790  331524 kubeadm.go:319] 
	I1212 01:47:31.945869  331524 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token cl3045.14tfdhpvamclapzj \
	I1212 01:47:31.945976  331524 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:afcc53ad074d6c1edbf934e87f29b46b63bfa667710db88570d6339eb754c50c \
	I1212 01:47:31.945999  331524 kubeadm.go:319] 	--control-plane 
	I1212 01:47:31.946005  331524 kubeadm.go:319] 
	I1212 01:47:31.946090  331524 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:47:31.946100  331524 kubeadm.go:319] 
	I1212 01:47:31.946178  331524 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token cl3045.14tfdhpvamclapzj \
	I1212 01:47:31.946278  331524 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:afcc53ad074d6c1edbf934e87f29b46b63bfa667710db88570d6339eb754c50c 
	I1212 01:47:31.951272  331524 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1212 01:47:31.951503  331524 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:47:31.951614  331524 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:47:31.951637  331524 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1212 01:47:31.954575  331524 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I1212 01:47:31.957560  331524 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 01:47:31.957641  331524 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1212 01:47:31.961635  331524 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1212 01:47:31.961673  331524 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4578 bytes)
	I1212 01:47:31.982372  331524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 01:47:32.521128  331524 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:47:32.521250  331524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:47:32.521321  331524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-341847 minikube.k8s.io/updated_at=2025_12_12T01_47_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=custom-flannel-341847 minikube.k8s.io/primary=true
	I1212 01:47:32.537854  331524 ops.go:34] apiserver oom_adj: -16
	I1212 01:47:32.667633  331524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:47:33.168165  331524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:47:33.668297  331524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:47:34.168667  331524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:47:34.668505  331524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:47:35.167753  331524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:47:35.667727  331524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:47:36.168062  331524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:47:36.346952  331524 kubeadm.go:1114] duration metric: took 3.825745s to wait for elevateKubeSystemPrivileges
	I1212 01:47:36.346978  331524 kubeadm.go:403] duration metric: took 21.566164779s to StartCluster
	I1212 01:47:36.347011  331524 settings.go:142] acquiring lock: {Name:mk6dd4250df69aeba4752e9f33aeef37272375c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:47:36.347075  331524 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:47:36.347996  331524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:47:36.348194  331524 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 01:47:36.348280  331524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 01:47:36.348511  331524 config.go:182] Loaded profile config "custom-flannel-341847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 01:47:36.348547  331524 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:47:36.348648  331524 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-341847"
	I1212 01:47:36.348662  331524 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-341847"
	I1212 01:47:36.348683  331524 host.go:66] Checking if "custom-flannel-341847" exists ...
	I1212 01:47:36.349350  331524 cli_runner.go:164] Run: docker container inspect custom-flannel-341847 --format={{.State.Status}}
	I1212 01:47:36.349541  331524 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-341847"
	I1212 01:47:36.349582  331524 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-341847"
	I1212 01:47:36.349909  331524 cli_runner.go:164] Run: docker container inspect custom-flannel-341847 --format={{.State.Status}}
	I1212 01:47:36.352616  331524 out.go:179] * Verifying Kubernetes components...
	I1212 01:47:36.364427  331524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:47:36.384388  331524 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-341847"
	I1212 01:47:36.384434  331524 host.go:66] Checking if "custom-flannel-341847" exists ...
	I1212 01:47:36.384876  331524 cli_runner.go:164] Run: docker container inspect custom-flannel-341847 --format={{.State.Status}}
	I1212 01:47:36.398121  331524 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:47:36.402109  331524 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:47:36.402139  331524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:47:36.402201  331524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-341847
	I1212 01:47:36.435022  331524 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:47:36.435043  331524 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:47:36.435106  331524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-341847
	I1212 01:47:36.436764  331524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/custom-flannel-341847/id_rsa Username:docker}
	I1212 01:47:36.470087  331524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/custom-flannel-341847/id_rsa Username:docker}
	I1212 01:47:36.752865  331524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
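The sed pipeline above edits the coredns ConfigMap in flight: it inserts a hosts block immediately before the forward plugin and a log directive before errors, so pods can resolve host.minikube.internal to the gateway. An illustrative Corefile fragment after the rewrite (the surrounding plugins are the stock kubeadm defaults, assumed here rather than captured from this cluster; only the log line and hosts block come from the sed expressions):

	.:53 {
	    log
	    errors
	    health
	    kubernetes cluster.local in-addr.arpa ip6.arpa
	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    cache 30
	}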
	I1212 01:47:36.756926  331524 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:47:36.775425  331524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:47:36.810149  331524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:47:37.371464  331524 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1212 01:47:37.373516  331524 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-341847" to be "Ready" ...
	I1212 01:47:37.628118  331524 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1212 01:47:37.631201  331524 addons.go:530] duration metric: took 1.282639512s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1212 01:47:37.877544  331524 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-341847" context rescaled to 1 replicas
	W1212 01:47:39.377525  331524 node_ready.go:57] node "custom-flannel-341847" has "Ready":"False" status (will retry)
	I1212 01:47:40.379195  331524 node_ready.go:49] node "custom-flannel-341847" is "Ready"
	I1212 01:47:40.379292  331524 node_ready.go:38] duration metric: took 3.005731715s for node "custom-flannel-341847" to be "Ready" ...
	I1212 01:47:40.379316  331524 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:47:40.379434  331524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:47:40.392337  331524 api_server.go:72] duration metric: took 4.044115695s to wait for apiserver process to appear ...
	I1212 01:47:40.392364  331524 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:47:40.392383  331524 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 01:47:40.401093  331524 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1212 01:47:40.402176  331524 api_server.go:141] control plane version: v1.34.2
	I1212 01:47:40.402199  331524 api_server.go:131] duration metric: took 9.828196ms to wait for apiserver health ...
	I1212 01:47:40.402208  331524 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:47:40.405204  331524 system_pods.go:59] 7 kube-system pods found
	I1212 01:47:40.405240  331524 system_pods.go:61] "coredns-66bc5c9577-444lp" [11c44c24-18ed-4b2e-8826-78b69c04e819] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:47:40.405247  331524 system_pods.go:61] "etcd-custom-flannel-341847" [dd658e95-8a02-4ae1-9459-57c6c35ebc52] Running
	I1212 01:47:40.405255  331524 system_pods.go:61] "kube-apiserver-custom-flannel-341847" [cbd627ec-264d-4ed8-b07d-ba8d634a9a57] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:47:40.405259  331524 system_pods.go:61] "kube-controller-manager-custom-flannel-341847" [f7f5a132-ba34-4d87-9851-4e548ab133a5] Running
	I1212 01:47:40.405264  331524 system_pods.go:61] "kube-proxy-bl5d5" [8e07cd41-1b51-457b-be3a-bc84feea5569] Running
	I1212 01:47:40.405272  331524 system_pods.go:61] "kube-scheduler-custom-flannel-341847" [7beffe96-8799-43ad-b922-83937f48c881] Running
	I1212 01:47:40.405282  331524 system_pods.go:61] "storage-provisioner" [fb7a6771-9120-4eb1-b62c-f44536138724] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:47:40.405291  331524 system_pods.go:74] duration metric: took 3.077614ms to wait for pod list to return data ...
	I1212 01:47:40.405304  331524 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:47:40.407953  331524 default_sa.go:45] found service account: "default"
	I1212 01:47:40.407980  331524 default_sa.go:55] duration metric: took 2.669488ms for default service account to be created ...
	I1212 01:47:40.407990  331524 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:47:40.411029  331524 system_pods.go:86] 7 kube-system pods found
	I1212 01:47:40.411064  331524 system_pods.go:89] "coredns-66bc5c9577-444lp" [11c44c24-18ed-4b2e-8826-78b69c04e819] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:47:40.411072  331524 system_pods.go:89] "etcd-custom-flannel-341847" [dd658e95-8a02-4ae1-9459-57c6c35ebc52] Running
	I1212 01:47:40.411080  331524 system_pods.go:89] "kube-apiserver-custom-flannel-341847" [cbd627ec-264d-4ed8-b07d-ba8d634a9a57] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:47:40.411086  331524 system_pods.go:89] "kube-controller-manager-custom-flannel-341847" [f7f5a132-ba34-4d87-9851-4e548ab133a5] Running
	I1212 01:47:40.411090  331524 system_pods.go:89] "kube-proxy-bl5d5" [8e07cd41-1b51-457b-be3a-bc84feea5569] Running
	I1212 01:47:40.411094  331524 system_pods.go:89] "kube-scheduler-custom-flannel-341847" [7beffe96-8799-43ad-b922-83937f48c881] Running
	I1212 01:47:40.411100  331524 system_pods.go:89] "storage-provisioner" [fb7a6771-9120-4eb1-b62c-f44536138724] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:47:40.411124  331524 retry.go:31] will retry after 260.91178ms: missing components: kube-dns
	I1212 01:47:40.679930  331524 system_pods.go:86] 7 kube-system pods found
	I1212 01:47:40.679972  331524 system_pods.go:89] "coredns-66bc5c9577-444lp" [11c44c24-18ed-4b2e-8826-78b69c04e819] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:47:40.679980  331524 system_pods.go:89] "etcd-custom-flannel-341847" [dd658e95-8a02-4ae1-9459-57c6c35ebc52] Running
	I1212 01:47:40.679991  331524 system_pods.go:89] "kube-apiserver-custom-flannel-341847" [cbd627ec-264d-4ed8-b07d-ba8d634a9a57] Running
	I1212 01:47:40.679996  331524 system_pods.go:89] "kube-controller-manager-custom-flannel-341847" [f7f5a132-ba34-4d87-9851-4e548ab133a5] Running
	I1212 01:47:40.680004  331524 system_pods.go:89] "kube-proxy-bl5d5" [8e07cd41-1b51-457b-be3a-bc84feea5569] Running
	I1212 01:47:40.680009  331524 system_pods.go:89] "kube-scheduler-custom-flannel-341847" [7beffe96-8799-43ad-b922-83937f48c881] Running
	I1212 01:47:40.680018  331524 system_pods.go:89] "storage-provisioner" [fb7a6771-9120-4eb1-b62c-f44536138724] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:47:40.680032  331524 retry.go:31] will retry after 319.052169ms: missing components: kube-dns
	I1212 01:47:41.004576  331524 system_pods.go:86] 7 kube-system pods found
	I1212 01:47:41.004606  331524 system_pods.go:89] "coredns-66bc5c9577-444lp" [11c44c24-18ed-4b2e-8826-78b69c04e819] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:47:41.004613  331524 system_pods.go:89] "etcd-custom-flannel-341847" [dd658e95-8a02-4ae1-9459-57c6c35ebc52] Running
	I1212 01:47:41.004619  331524 system_pods.go:89] "kube-apiserver-custom-flannel-341847" [cbd627ec-264d-4ed8-b07d-ba8d634a9a57] Running
	I1212 01:47:41.004624  331524 system_pods.go:89] "kube-controller-manager-custom-flannel-341847" [f7f5a132-ba34-4d87-9851-4e548ab133a5] Running
	I1212 01:47:41.004628  331524 system_pods.go:89] "kube-proxy-bl5d5" [8e07cd41-1b51-457b-be3a-bc84feea5569] Running
	I1212 01:47:41.004632  331524 system_pods.go:89] "kube-scheduler-custom-flannel-341847" [7beffe96-8799-43ad-b922-83937f48c881] Running
	I1212 01:47:41.004638  331524 system_pods.go:89] "storage-provisioner" [fb7a6771-9120-4eb1-b62c-f44536138724] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:47:41.004652  331524 retry.go:31] will retry after 373.556405ms: missing components: kube-dns
	I1212 01:47:41.390682  331524 system_pods.go:86] 7 kube-system pods found
	I1212 01:47:41.390771  331524 system_pods.go:89] "coredns-66bc5c9577-444lp" [11c44c24-18ed-4b2e-8826-78b69c04e819] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:47:41.390798  331524 system_pods.go:89] "etcd-custom-flannel-341847" [dd658e95-8a02-4ae1-9459-57c6c35ebc52] Running
	I1212 01:47:41.390855  331524 system_pods.go:89] "kube-apiserver-custom-flannel-341847" [cbd627ec-264d-4ed8-b07d-ba8d634a9a57] Running
	I1212 01:47:41.390892  331524 system_pods.go:89] "kube-controller-manager-custom-flannel-341847" [f7f5a132-ba34-4d87-9851-4e548ab133a5] Running
	I1212 01:47:41.390917  331524 system_pods.go:89] "kube-proxy-bl5d5" [8e07cd41-1b51-457b-be3a-bc84feea5569] Running
	I1212 01:47:41.390948  331524 system_pods.go:89] "kube-scheduler-custom-flannel-341847" [7beffe96-8799-43ad-b922-83937f48c881] Running
	I1212 01:47:41.390980  331524 system_pods.go:89] "storage-provisioner" [fb7a6771-9120-4eb1-b62c-f44536138724] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:47:41.391057  331524 retry.go:31] will retry after 457.300852ms: missing components: kube-dns
	I1212 01:47:41.867745  331524 system_pods.go:86] 7 kube-system pods found
	I1212 01:47:41.867834  331524 system_pods.go:89] "coredns-66bc5c9577-444lp" [11c44c24-18ed-4b2e-8826-78b69c04e819] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:47:41.867865  331524 system_pods.go:89] "etcd-custom-flannel-341847" [dd658e95-8a02-4ae1-9459-57c6c35ebc52] Running
	I1212 01:47:41.867883  331524 system_pods.go:89] "kube-apiserver-custom-flannel-341847" [cbd627ec-264d-4ed8-b07d-ba8d634a9a57] Running
	I1212 01:47:41.867894  331524 system_pods.go:89] "kube-controller-manager-custom-flannel-341847" [f7f5a132-ba34-4d87-9851-4e548ab133a5] Running
	I1212 01:47:41.867900  331524 system_pods.go:89] "kube-proxy-bl5d5" [8e07cd41-1b51-457b-be3a-bc84feea5569] Running
	I1212 01:47:41.867904  331524 system_pods.go:89] "kube-scheduler-custom-flannel-341847" [7beffe96-8799-43ad-b922-83937f48c881] Running
	I1212 01:47:41.867908  331524 system_pods.go:89] "storage-provisioner" [fb7a6771-9120-4eb1-b62c-f44536138724] Running
	I1212 01:47:41.867947  331524 retry.go:31] will retry after 759.520058ms: missing components: kube-dns
	I1212 01:47:42.631440  331524 system_pods.go:86] 7 kube-system pods found
	I1212 01:47:42.631472  331524 system_pods.go:89] "coredns-66bc5c9577-444lp" [11c44c24-18ed-4b2e-8826-78b69c04e819] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:47:42.631479  331524 system_pods.go:89] "etcd-custom-flannel-341847" [dd658e95-8a02-4ae1-9459-57c6c35ebc52] Running
	I1212 01:47:42.631486  331524 system_pods.go:89] "kube-apiserver-custom-flannel-341847" [cbd627ec-264d-4ed8-b07d-ba8d634a9a57] Running
	I1212 01:47:42.631491  331524 system_pods.go:89] "kube-controller-manager-custom-flannel-341847" [f7f5a132-ba34-4d87-9851-4e548ab133a5] Running
	I1212 01:47:42.631496  331524 system_pods.go:89] "kube-proxy-bl5d5" [8e07cd41-1b51-457b-be3a-bc84feea5569] Running
	I1212 01:47:42.631500  331524 system_pods.go:89] "kube-scheduler-custom-flannel-341847" [7beffe96-8799-43ad-b922-83937f48c881] Running
	I1212 01:47:42.631508  331524 system_pods.go:89] "storage-provisioner" [fb7a6771-9120-4eb1-b62c-f44536138724] Running
	I1212 01:47:42.631523  331524 retry.go:31] will retry after 818.424977ms: missing components: kube-dns
	I1212 01:47:43.454008  331524 system_pods.go:86] 7 kube-system pods found
	I1212 01:47:43.454039  331524 system_pods.go:89] "coredns-66bc5c9577-444lp" [11c44c24-18ed-4b2e-8826-78b69c04e819] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:47:43.454047  331524 system_pods.go:89] "etcd-custom-flannel-341847" [dd658e95-8a02-4ae1-9459-57c6c35ebc52] Running
	I1212 01:47:43.454082  331524 system_pods.go:89] "kube-apiserver-custom-flannel-341847" [cbd627ec-264d-4ed8-b07d-ba8d634a9a57] Running
	I1212 01:47:43.454087  331524 system_pods.go:89] "kube-controller-manager-custom-flannel-341847" [f7f5a132-ba34-4d87-9851-4e548ab133a5] Running
	I1212 01:47:43.454093  331524 system_pods.go:89] "kube-proxy-bl5d5" [8e07cd41-1b51-457b-be3a-bc84feea5569] Running
	I1212 01:47:43.454101  331524 system_pods.go:89] "kube-scheduler-custom-flannel-341847" [7beffe96-8799-43ad-b922-83937f48c881] Running
	I1212 01:47:43.454105  331524 system_pods.go:89] "storage-provisioner" [fb7a6771-9120-4eb1-b62c-f44536138724] Running
	I1212 01:47:43.454119  331524 retry.go:31] will retry after 1.037944683s: missing components: kube-dns
	I1212 01:47:44.495716  331524 system_pods.go:86] 7 kube-system pods found
	I1212 01:47:44.495752  331524 system_pods.go:89] "coredns-66bc5c9577-444lp" [11c44c24-18ed-4b2e-8826-78b69c04e819] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:47:44.495759  331524 system_pods.go:89] "etcd-custom-flannel-341847" [dd658e95-8a02-4ae1-9459-57c6c35ebc52] Running
	I1212 01:47:44.495766  331524 system_pods.go:89] "kube-apiserver-custom-flannel-341847" [cbd627ec-264d-4ed8-b07d-ba8d634a9a57] Running
	I1212 01:47:44.495771  331524 system_pods.go:89] "kube-controller-manager-custom-flannel-341847" [f7f5a132-ba34-4d87-9851-4e548ab133a5] Running
	I1212 01:47:44.495776  331524 system_pods.go:89] "kube-proxy-bl5d5" [8e07cd41-1b51-457b-be3a-bc84feea5569] Running
	I1212 01:47:44.495780  331524 system_pods.go:89] "kube-scheduler-custom-flannel-341847" [7beffe96-8799-43ad-b922-83937f48c881] Running
	I1212 01:47:44.495785  331524 system_pods.go:89] "storage-provisioner" [fb7a6771-9120-4eb1-b62c-f44536138724] Running
	I1212 01:47:44.495800  331524 retry.go:31] will retry after 1.021292101s: missing components: kube-dns
	I1212 01:47:45.520894  331524 system_pods.go:86] 7 kube-system pods found
	I1212 01:47:45.520930  331524 system_pods.go:89] "coredns-66bc5c9577-444lp" [11c44c24-18ed-4b2e-8826-78b69c04e819] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:47:45.520937  331524 system_pods.go:89] "etcd-custom-flannel-341847" [dd658e95-8a02-4ae1-9459-57c6c35ebc52] Running
	I1212 01:47:45.520944  331524 system_pods.go:89] "kube-apiserver-custom-flannel-341847" [cbd627ec-264d-4ed8-b07d-ba8d634a9a57] Running
	I1212 01:47:45.520950  331524 system_pods.go:89] "kube-controller-manager-custom-flannel-341847" [f7f5a132-ba34-4d87-9851-4e548ab133a5] Running
	I1212 01:47:45.520955  331524 system_pods.go:89] "kube-proxy-bl5d5" [8e07cd41-1b51-457b-be3a-bc84feea5569] Running
	I1212 01:47:45.520962  331524 system_pods.go:89] "kube-scheduler-custom-flannel-341847" [7beffe96-8799-43ad-b922-83937f48c881] Running
	I1212 01:47:45.520966  331524 system_pods.go:89] "storage-provisioner" [fb7a6771-9120-4eb1-b62c-f44536138724] Running
	I1212 01:47:45.520981  331524 retry.go:31] will retry after 1.188668594s: missing components: kube-dns
	I1212 01:47:46.713678  331524 system_pods.go:86] 7 kube-system pods found
	I1212 01:47:46.713717  331524 system_pods.go:89] "coredns-66bc5c9577-444lp" [11c44c24-18ed-4b2e-8826-78b69c04e819] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:47:46.713724  331524 system_pods.go:89] "etcd-custom-flannel-341847" [dd658e95-8a02-4ae1-9459-57c6c35ebc52] Running
	I1212 01:47:46.713731  331524 system_pods.go:89] "kube-apiserver-custom-flannel-341847" [cbd627ec-264d-4ed8-b07d-ba8d634a9a57] Running
	I1212 01:47:46.713736  331524 system_pods.go:89] "kube-controller-manager-custom-flannel-341847" [f7f5a132-ba34-4d87-9851-4e548ab133a5] Running
	I1212 01:47:46.713740  331524 system_pods.go:89] "kube-proxy-bl5d5" [8e07cd41-1b51-457b-be3a-bc84feea5569] Running
	I1212 01:47:46.713744  331524 system_pods.go:89] "kube-scheduler-custom-flannel-341847" [7beffe96-8799-43ad-b922-83937f48c881] Running
	I1212 01:47:46.713748  331524 system_pods.go:89] "storage-provisioner" [fb7a6771-9120-4eb1-b62c-f44536138724] Running
	I1212 01:47:46.713762  331524 retry.go:31] will retry after 2.118918602s: missing components: kube-dns
	I1212 01:47:48.837339  331524 system_pods.go:86] 7 kube-system pods found
	I1212 01:47:48.837384  331524 system_pods.go:89] "coredns-66bc5c9577-444lp" [11c44c24-18ed-4b2e-8826-78b69c04e819] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:47:48.837391  331524 system_pods.go:89] "etcd-custom-flannel-341847" [dd658e95-8a02-4ae1-9459-57c6c35ebc52] Running
	I1212 01:47:48.837405  331524 system_pods.go:89] "kube-apiserver-custom-flannel-341847" [cbd627ec-264d-4ed8-b07d-ba8d634a9a57] Running
	I1212 01:47:48.837414  331524 system_pods.go:89] "kube-controller-manager-custom-flannel-341847" [f7f5a132-ba34-4d87-9851-4e548ab133a5] Running
	I1212 01:47:48.837418  331524 system_pods.go:89] "kube-proxy-bl5d5" [8e07cd41-1b51-457b-be3a-bc84feea5569] Running
	I1212 01:47:48.837421  331524 system_pods.go:89] "kube-scheduler-custom-flannel-341847" [7beffe96-8799-43ad-b922-83937f48c881] Running
	I1212 01:47:48.837425  331524 system_pods.go:89] "storage-provisioner" [fb7a6771-9120-4eb1-b62c-f44536138724] Running
	I1212 01:47:48.837439  331524 retry.go:31] will retry after 2.466308496s: missing components: kube-dns
	I1212 01:47:51.307959  331524 system_pods.go:86] 7 kube-system pods found
	I1212 01:47:51.307995  331524 system_pods.go:89] "coredns-66bc5c9577-444lp" [11c44c24-18ed-4b2e-8826-78b69c04e819] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:47:51.308001  331524 system_pods.go:89] "etcd-custom-flannel-341847" [dd658e95-8a02-4ae1-9459-57c6c35ebc52] Running
	I1212 01:47:51.308012  331524 system_pods.go:89] "kube-apiserver-custom-flannel-341847" [cbd627ec-264d-4ed8-b07d-ba8d634a9a57] Running
	I1212 01:47:51.308017  331524 system_pods.go:89] "kube-controller-manager-custom-flannel-341847" [f7f5a132-ba34-4d87-9851-4e548ab133a5] Running
	I1212 01:47:51.308020  331524 system_pods.go:89] "kube-proxy-bl5d5" [8e07cd41-1b51-457b-be3a-bc84feea5569] Running
	I1212 01:47:51.308024  331524 system_pods.go:89] "kube-scheduler-custom-flannel-341847" [7beffe96-8799-43ad-b922-83937f48c881] Running
	I1212 01:47:51.308031  331524 system_pods.go:89] "storage-provisioner" [fb7a6771-9120-4eb1-b62c-f44536138724] Running
	I1212 01:47:51.308045  331524 retry.go:31] will retry after 2.843601802s: missing components: kube-dns
	I1212 01:47:54.155953  331524 system_pods.go:86] 7 kube-system pods found
	I1212 01:47:54.155990  331524 system_pods.go:89] "coredns-66bc5c9577-444lp" [11c44c24-18ed-4b2e-8826-78b69c04e819] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:47:54.155999  331524 system_pods.go:89] "etcd-custom-flannel-341847" [dd658e95-8a02-4ae1-9459-57c6c35ebc52] Running
	I1212 01:47:54.156005  331524 system_pods.go:89] "kube-apiserver-custom-flannel-341847" [cbd627ec-264d-4ed8-b07d-ba8d634a9a57] Running
	I1212 01:47:54.156009  331524 system_pods.go:89] "kube-controller-manager-custom-flannel-341847" [f7f5a132-ba34-4d87-9851-4e548ab133a5] Running
	I1212 01:47:54.156013  331524 system_pods.go:89] "kube-proxy-bl5d5" [8e07cd41-1b51-457b-be3a-bc84feea5569] Running
	I1212 01:47:54.156017  331524 system_pods.go:89] "kube-scheduler-custom-flannel-341847" [7beffe96-8799-43ad-b922-83937f48c881] Running
	I1212 01:47:54.156021  331524 system_pods.go:89] "storage-provisioner" [fb7a6771-9120-4eb1-b62c-f44536138724] Running
	I1212 01:47:54.156029  331524 system_pods.go:126] duration metric: took 13.748032884s to wait for k8s-apps to be running ...
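
The "will retry after …" lines above come from minikube's retry helper: it polls the kube-system pod list until every required component (here kube-dns) is Running, sleeping a growing, jittered interval between attempts. A minimal sketch of that poll-with-backoff pattern, assuming a hypothetical check function (this is not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForComponents polls check() until it succeeds or the deadline passes,
// sleeping a growing, jittered interval between attempts -- the same shape
// as the "will retry after 1.021292101s" lines in the log above.
func waitForComponents(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	base := time.Second
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s: %w", timeout, err)
		}
		// Grow the base delay and add jitter so concurrent waiters spread out.
		sleep := base + time.Duration(rand.Int63n(int64(base)))
		base = base * 3 / 2
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
	}
}

func main() {
	pending := 3 // stand-in for "kube-dns is not Running yet"
	err := waitForComponents(func() error {
		if pending > 0 {
			pending--
			return errors.New("missing components: kube-dns")
		}
		return nil
	}, time.Minute)
	fmt.Println("wait finished:", err)
}
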
	I1212 01:47:54.156041  331524 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:47:54.156098  331524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:47:54.169177  331524 system_svc.go:56] duration metric: took 13.128468ms WaitForService to wait for kubelet
	I1212 01:47:54.169205  331524 kubeadm.go:587] duration metric: took 17.820989519s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:47:54.169224  331524 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:47:54.172156  331524 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 01:47:54.172188  331524 node_conditions.go:123] node cpu capacity is 2
	I1212 01:47:54.172201  331524 node_conditions.go:105] duration metric: took 2.972571ms to run NodePressure ...
	I1212 01:47:54.172213  331524 start.go:242] waiting for startup goroutines ...
	I1212 01:47:54.172221  331524 start.go:247] waiting for cluster config update ...
	I1212 01:47:54.172232  331524 start.go:256] writing updated cluster config ...
	I1212 01:47:54.172514  331524 ssh_runner.go:195] Run: rm -f paused
	I1212 01:47:54.176339  331524 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 01:47:54.180134  331524 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-444lp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:47:54.686355  331524 pod_ready.go:94] pod "coredns-66bc5c9577-444lp" is "Ready"
	I1212 01:47:54.686385  331524 pod_ready.go:86] duration metric: took 506.218725ms for pod "coredns-66bc5c9577-444lp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:47:54.689455  331524 pod_ready.go:83] waiting for pod "etcd-custom-flannel-341847" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:47:54.694194  331524 pod_ready.go:94] pod "etcd-custom-flannel-341847" is "Ready"
	I1212 01:47:54.694222  331524 pod_ready.go:86] duration metric: took 4.741235ms for pod "etcd-custom-flannel-341847" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:47:54.696568  331524 pod_ready.go:83] waiting for pod "kube-apiserver-custom-flannel-341847" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:47:54.701107  331524 pod_ready.go:94] pod "kube-apiserver-custom-flannel-341847" is "Ready"
	I1212 01:47:54.701131  331524 pod_ready.go:86] duration metric: took 4.533381ms for pod "kube-apiserver-custom-flannel-341847" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:47:54.703863  331524 pod_ready.go:83] waiting for pod "kube-controller-manager-custom-flannel-341847" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:47:54.979961  331524 pod_ready.go:94] pod "kube-controller-manager-custom-flannel-341847" is "Ready"
	I1212 01:47:54.979991  331524 pod_ready.go:86] duration metric: took 276.098528ms for pod "kube-controller-manager-custom-flannel-341847" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:47:55.180579  331524 pod_ready.go:83] waiting for pod "kube-proxy-bl5d5" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:47:55.580507  331524 pod_ready.go:94] pod "kube-proxy-bl5d5" is "Ready"
	I1212 01:47:55.580534  331524 pod_ready.go:86] duration metric: took 399.928804ms for pod "kube-proxy-bl5d5" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:47:55.780410  331524 pod_ready.go:83] waiting for pod "kube-scheduler-custom-flannel-341847" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:47:56.180286  331524 pod_ready.go:94] pod "kube-scheduler-custom-flannel-341847" is "Ready"
	I1212 01:47:56.180365  331524 pod_ready.go:86] duration metric: took 399.887869ms for pod "kube-scheduler-custom-flannel-341847" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 01:47:56.180386  331524 pod_ready.go:40] duration metric: took 2.004012583s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 01:47:56.230546  331524 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1212 01:47:56.233786  331524 out.go:179] * Done! kubectl is now configured to use "custom-flannel-341847" cluster and "default" namespace by default
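
The pod_ready lines in the start log above reduce to one check per pod: is the PodReady condition True. A sketch of that predicate using the k8s.io/api/core/v1 types (the surrounding label-selector loop and timeout handling are minikube's; this helper is illustrative only):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is True -- the
// same test behind the `pod "coredns-66bc5c9577-444lp" is "Ready"` lines.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionTrue},
			},
		},
	}
	fmt.Println(isPodReady(pod)) // true
}
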
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451347340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451362208Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451390508Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451404473Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451413802Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451425445Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451435169Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451453918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451470123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451502804Z" level=info msg="Connect containerd service"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451753785Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.452300474Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.470473080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.470539313Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.470573570Z" level=info msg="Start subscribing containerd event"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.470624376Z" level=info msg="Start recovering state"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499034921Z" level=info msg="Start event monitor"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499222886Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499311773Z" level=info msg="Start streaming server"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499396130Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499649310Z" level=info msg="runtime interface starting up..."
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499722058Z" level=info msg="starting plugins..."
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499802846Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 01:33:16 no-preload-361053 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.501821533Z" level=info msg="containerd successfully booted in 0.072171s"
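
One line above is worth flagging: at boot the CRI plugin logged "no network config found in /etc/cni/net.d". That is expected before a CNI (kindnet here) drops its config file; pod networking stays uninitialized until the directory is populated. A quick check of that directory, as a sketch:

package main

import (
	"fmt"
	"os"
)

func main() {
	// containerd's CRI plugin loads CNI configs from this directory; an
	// empty directory at boot produces exactly the warning above, and pod
	// networking stays down until a CNI writes its config.
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	if len(entries) == 0 {
		fmt.Println("no CNI config yet - pod networking not initialized")
	}
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}
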
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:48:23.226803    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:48:23.227376    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:48:23.228915    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:48:23.229484    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:48:23.231165    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
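
The describe-nodes failure is a downstream symptom: every request to localhost:8443 gets "connection refused", i.e. nothing is listening, which matches the kubelet crash loop shown further down. A plain TCP dial is enough to distinguish a missing listener from a listener that rejects TLS or auth; a sketch against the same endpoint:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" on a raw dial means no process is bound to the
	// port at all -- exactly the errors in the stderr above.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8443")
}
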
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	[Dec12 00:40] hrtimer: interrupt took 11339963 ns
	
	
	==> kernel <==
	 01:48:23 up  2:30,  0 user,  load average: 2.87, 1.94, 1.60
	Linux no-preload-361053 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 01:48:20 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:48:20 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 12 01:48:20 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:48:20 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:48:20 no-preload-361053 kubelet[8025]: E1212 01:48:20.848764    8025 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:48:20 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:48:20 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:48:21 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1206.
	Dec 12 01:48:21 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:48:21 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:48:21 no-preload-361053 kubelet[8032]: E1212 01:48:21.643693    8032 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:48:21 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:48:21 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:48:22 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1207.
	Dec 12 01:48:22 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:48:22 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:48:22 no-preload-361053 kubelet[8067]: E1212 01:48:22.379955    8067 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:48:22 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:48:22 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:48:23 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1208.
	Dec 12 01:48:23 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:48:23 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:48:23 no-preload-361053 kubelet[8146]: E1212 01:48:23.137806    8146 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:48:23 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:48:23 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
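The kubelet section of the log above contains the actual root cause of the no-preload failures: the v1.35.0-beta.0 kubelet refuses to validate its configuration on a cgroup v1 host (this Ubuntu 20.04 builder), and systemd has restarted it more than 1200 times. Whether a host runs cgroup v1 or the unified v2 hierarchy can be read from the filesystem magic of /sys/fs/cgroup; a sketch using golang.org/x/sys/unix:

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// cgroup v2 mounts /sys/fs/cgroup with the cgroup2 filesystem magic;
	// anything else (the tmpfs holding v1 hierarchies) means cgroup v1,
	// which the v1.35.0-beta.0 kubelet above refuses to run on.
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		fmt.Println("statfs failed:", err)
		return
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 - matches the kubelet validation failure above")
	}
}
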
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-361053 -n no-preload-361053
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-361053 -n no-preload-361053: exit status 2 (506.841683ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-361053" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.52s)

x
+
TestStartStop/group/newest-cni/serial/Pause (9.89s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-256959 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-256959 -n newest-cni-256959
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-256959 -n newest-cni-256959: exit status 2 (326.841851ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-256959 -n newest-cni-256959
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-256959 -n newest-cni-256959: exit status 2 (293.88855ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-256959 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-256959 -n newest-cni-256959
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-256959 -n newest-cni-256959: exit status 2 (345.593323ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-256959 -n newest-cni-256959
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-256959 -n newest-cni-256959: exit status 2 (310.538172ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
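
Each status check above passes a Go template via --format: {{.APIServer}}, {{.Kubelet}}, and {{.Host}} are fields of minikube's status struct rendered with text/template. A stand-in sketch of the mechanism (the struct here is hypothetical; only the field names are taken from the templates used above):

package main

import (
	"os"
	"text/template"
)

// Stand-in for minikube's status struct; only the field names come from
// the --format templates used in this test.
type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	// Rough equivalent of: minikube status --format={{.APIServer}}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	s := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
}
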
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-256959
helpers_test.go:244: (dbg) docker inspect newest-cni-256959:

-- stdout --
	[
	    {
	        "Id": "361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b",
	        "Created": "2025-12-12T01:25:15.433462291Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 291584,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T01:35:11.599618298Z",
	            "FinishedAt": "2025-12-12T01:35:10.241180563Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/hostname",
	        "HostsPath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/hosts",
	        "LogPath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b-json.log",
	        "Name": "/newest-cni-256959",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-256959:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-256959",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b",
	                "LowerDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017/merged",
	                "UpperDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017/diff",
	                "WorkDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-256959",
	                "Source": "/var/lib/docker/volumes/newest-cni-256959/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-256959",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-256959",
	                "name.minikube.sigs.k8s.io": "newest-cni-256959",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "345adc76212ae94224c61dd049e472f16ee67ee027a331e11cdf648a15dff74a",
	            "SandboxKey": "/var/run/docker/netns/345adc76212a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-256959": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:19:c4:dc:e5:59",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "08d9e23f02a4d7730d420d79f658bc1854aa3d62ee2a54a8cd34a455b2ba0431",
	                    "EndpointID": "e780ab70cd5a9e96f54f2a272324b26b9e51bece9b706db46ac5aff93fb5ac56",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-256959",
	                        "361f9c16c44a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
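The inspect dump above shows the Docker layer itself is healthy ("Status": "running", "Paused": false), so the pause failure is inside the guest (kubelet/apiserver), not in the container runtime. For a targeted query, docker inspect accepts a Go template via -f instead of emitting the full JSON document; a sketch shelling out from Go, reusing this test's container name:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -f applies a Go template in the docker CLI, so only the fields of
	// interest come back instead of the full JSON shown above.
	out, err := exec.Command("docker", "inspect",
		"-f", "{{.State.Status}} paused={{.State.Paused}}",
		"newest-cni-256959").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Print(string(out)) // e.g. "running paused=false"
}
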
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-256959 -n newest-cni-256959
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-256959 -n newest-cni-256959: exit status 2 (337.000881ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-256959 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-256959 logs -n 25: (1.537623838s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p disable-driver-mounts-539158                                                                                                                                                                                                                            │ disable-driver-mounts-539158 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-648696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ stop    │ -p embed-certs-648696 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ addons  │ enable dashboard -p embed-certs-648696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:24 UTC │
	│ image   │ embed-certs-648696 image list --format=json                                                                                                                                                                                                                │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ pause   │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ unpause │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ start   │ -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-361053 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:31 UTC │                     │
	│ stop    │ -p no-preload-361053 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │ 12 Dec 25 01:33 UTC │
	│ addons  │ enable dashboard -p no-preload-361053 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │ 12 Dec 25 01:33 UTC │
	│ start   │ -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-256959 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │                     │
	│ stop    │ -p newest-cni-256959 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │ 12 Dec 25 01:35 UTC │
	│ addons  │ enable dashboard -p newest-cni-256959 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │ 12 Dec 25 01:35 UTC │
	│ start   │ -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │                     │
	│ image   │ newest-cni-256959 image list --format=json                                                                                                                                                                                                                 │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:41 UTC │ 12 Dec 25 01:41 UTC │
	│ pause   │ -p newest-cni-256959 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:41 UTC │ 12 Dec 25 01:41 UTC │
	│ unpause │ -p newest-cni-256959 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:41 UTC │ 12 Dec 25 01:41 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 01:35:11
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:35:11.336080  291455 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:35:11.336277  291455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:35:11.336290  291455 out.go:374] Setting ErrFile to fd 2...
	I1212 01:35:11.336296  291455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:35:11.336566  291455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:35:11.336950  291455 out.go:368] Setting JSON to false
	I1212 01:35:11.337843  291455 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8258,"bootTime":1765495054,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 01:35:11.337913  291455 start.go:143] virtualization:  
	I1212 01:35:11.341103  291455 out.go:179] * [newest-cni-256959] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:35:11.345273  291455 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:35:11.345376  291455 notify.go:221] Checking for updates...
	I1212 01:35:11.351231  291455 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:35:11.354134  291455 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:35:11.357086  291455 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 01:35:11.359981  291455 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:35:11.363090  291455 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:35:11.366381  291455 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:35:11.367076  291455 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:35:11.397719  291455 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:35:11.397845  291455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:35:11.450218  291455 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:35:11.441400779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:35:11.450324  291455 docker.go:319] overlay module found
	I1212 01:35:11.453495  291455 out.go:179] * Using the docker driver based on existing profile
	I1212 01:35:11.456257  291455 start.go:309] selected driver: docker
	I1212 01:35:11.456272  291455 start.go:927] validating driver "docker" against &{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:11.456385  291455 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:35:11.457105  291455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:35:11.512167  291455 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:35:11.503270098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:35:11.512501  291455 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 01:35:11.512533  291455 cni.go:84] Creating CNI manager for ""
	I1212 01:35:11.512581  291455 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:35:11.512620  291455 start.go:353] cluster config:
	{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:11.517595  291455 out.go:179] * Starting "newest-cni-256959" primary control-plane node in "newest-cni-256959" cluster
	I1212 01:35:11.520355  291455 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 01:35:11.523510  291455 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:35:11.526310  291455 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:35:11.526350  291455 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 01:35:11.526380  291455 cache.go:65] Caching tarball of preloaded images
	I1212 01:35:11.526401  291455 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:35:11.526463  291455 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 01:35:11.526474  291455 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 01:35:11.526577  291455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:35:11.545949  291455 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:35:11.545972  291455 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:35:11.545990  291455 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:35:11.546021  291455 start.go:360] acquireMachinesLock for newest-cni-256959: {Name:mke4c35c218ad59b1da2c46074b57e71134fc7be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:35:11.546106  291455 start.go:364] duration metric: took 61.449µs to acquireMachinesLock for "newest-cni-256959"
	I1212 01:35:11.546128  291455 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:35:11.546140  291455 fix.go:54] fixHost starting: 
	I1212 01:35:11.546394  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:11.562986  291455 fix.go:112] recreateIfNeeded on newest-cni-256959: state=Stopped err=<nil>
	W1212 01:35:11.563044  291455 fix.go:138] unexpected machine state, will restart: <nil>
	W1212 01:35:12.535792  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:12.641222  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:12.704850  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:35:12.704951  287206 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 01:35:12.708213  287206 out.go:179] * Enabled addons: 
	I1212 01:35:12.711265  287206 addons.go:530] duration metric: took 1m55.054971797s for enable addons: enabled=[]
	W1212 01:35:14.536558  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
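The repeated "connection refused" against 192.168.85.2:8443 and localhost:8443 in the no-preload log above means nothing is listening on the apiserver port yet. A hedged manual check from the host (container name taken from the log; assumes ss is present in the node image):

    # Hypothetical spot-check: is anything bound to 8443 inside the node container?
    docker exec no-preload-361053 sh -c 'ss -ltn | grep -w 8443 || echo "apiserver not listening"'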
	I1212 01:35:11.566225  291455 out.go:252] * Restarting existing docker container for "newest-cni-256959" ...
	I1212 01:35:11.566307  291455 cli_runner.go:164] Run: docker start newest-cni-256959
	I1212 01:35:11.824711  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:11.850549  291455 kic.go:430] container "newest-cni-256959" state is running.
	I1212 01:35:11.850948  291455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:35:11.874496  291455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:35:11.875491  291455 machine.go:94] provisionDockerMachine start ...
	I1212 01:35:11.875566  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:11.904543  291455 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:11.904867  291455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 01:35:11.904894  291455 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:35:11.905649  291455 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 01:35:15.062841  291455 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:35:15.062884  291455 ubuntu.go:182] provisioning hostname "newest-cni-256959"
	I1212 01:35:15.062966  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.081374  291455 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:15.081715  291455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 01:35:15.081732  291455 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-256959 && echo "newest-cni-256959" | sudo tee /etc/hostname
	I1212 01:35:15.244594  291455 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:35:15.244717  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.262885  291455 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:15.263226  291455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 01:35:15.263249  291455 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-256959' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-256959/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-256959' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:35:15.415381  291455 main.go:143] libmachine: SSH cmd err, output: <nil>: 
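The /etc/hosts script above is idempotent: it only rewrites the 127.0.1.1 entry when the hostname is missing, so reruns are no-ops. To confirm the result by hand, a sketch (container name from the log):

    # Expect a "127.0.1.1 newest-cni-256959" line and a matching hostname.
    docker exec newest-cni-256959 sh -c 'hostname; grep 127.0.1.1 /etc/hosts'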
	I1212 01:35:15.415407  291455 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 01:35:15.415450  291455 ubuntu.go:190] setting up certificates
	I1212 01:35:15.415469  291455 provision.go:84] configureAuth start
	I1212 01:35:15.415542  291455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:35:15.432184  291455 provision.go:143] copyHostCerts
	I1212 01:35:15.432260  291455 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 01:35:15.432274  291455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 01:35:15.432771  291455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 01:35:15.432891  291455 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 01:35:15.432905  291455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 01:35:15.432935  291455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 01:35:15.433008  291455 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 01:35:15.433018  291455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 01:35:15.433044  291455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 01:35:15.433100  291455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.newest-cni-256959 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-256959]
	I1212 01:35:15.664957  291455 provision.go:177] copyRemoteCerts
	I1212 01:35:15.665025  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:35:15.665084  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.682010  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:15.786690  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:35:15.804464  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:35:15.821597  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 01:35:15.838753  291455 provision.go:87] duration metric: took 423.263374ms to configureAuth
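configureAuth generated a server certificate whose SANs were listed a few lines up (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-256959). A hedged way to verify they actually landed in the cert, using the host-side path from the log:

    # Should print the Subject Alternative Name extension with all five entries.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'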
	I1212 01:35:15.838782  291455 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:35:15.839040  291455 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:35:15.839053  291455 machine.go:97] duration metric: took 3.963544394s to provisionDockerMachine
	I1212 01:35:15.839061  291455 start.go:293] postStartSetup for "newest-cni-256959" (driver="docker")
	I1212 01:35:15.839072  291455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:35:15.839119  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:35:15.839169  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.855712  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:15.959303  291455 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:35:15.962341  291455 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:35:15.962368  291455 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:35:15.962380  291455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 01:35:15.962429  291455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 01:35:15.962509  291455 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 01:35:15.962609  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:35:15.969472  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:35:15.986194  291455 start.go:296] duration metric: took 147.119175ms for postStartSetup
	I1212 01:35:15.986304  291455 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:35:15.986375  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:16.005019  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:16.107859  291455 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:35:16.112663  291455 fix.go:56] duration metric: took 4.566516262s for fixHost
	I1212 01:35:16.112691  291455 start.go:83] releasing machines lock for "newest-cni-256959", held for 4.566573288s
	I1212 01:35:16.112760  291455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:35:16.129477  291455 ssh_runner.go:195] Run: cat /version.json
	I1212 01:35:16.129531  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:16.129775  291455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:35:16.129824  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:16.153158  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:16.155921  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:16.367474  291455 ssh_runner.go:195] Run: systemctl --version
	I1212 01:35:16.373832  291455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:35:16.378022  291455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:35:16.378104  291455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:35:16.385747  291455 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 01:35:16.385772  291455 start.go:496] detecting cgroup driver to use...
	I1212 01:35:16.385819  291455 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:35:16.385882  291455 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 01:35:16.403657  291455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 01:35:16.417469  291455 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:35:16.417564  291455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:35:16.433612  291455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:35:16.446861  291455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:35:16.554018  291455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:35:16.672193  291455 docker.go:234] disabling docker service ...
	I1212 01:35:16.672283  291455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:35:16.687238  291455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:35:16.700659  291455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:35:16.812563  291455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:35:16.928270  291455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:35:16.941185  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:35:16.957067  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 01:35:16.966276  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 01:35:16.975221  291455 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 01:35:16.975292  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 01:35:16.984294  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:35:16.993328  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 01:35:17.004796  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:35:17.015289  291455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:35:17.023922  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 01:35:17.036658  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 01:35:17.046732  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 01:35:17.056354  291455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:35:17.064063  291455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:35:17.071833  291455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:17.188012  291455 ssh_runner.go:195] Run: sudo systemctl restart containerd
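The sed batch above flips containerd to the cgroupfs driver and pins the pause image before the restart. A quick hedged verification once containerd is back up:

    # Expect SystemdCgroup = false and sandbox_image = "registry.k8s.io/pause:3.10.1".
    docker exec newest-cni-256959 grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml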
	I1212 01:35:17.306110  291455 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 01:35:17.306231  291455 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 01:35:17.309882  291455 start.go:564] Will wait 60s for crictl version
	I1212 01:35:17.309968  291455 ssh_runner.go:195] Run: which crictl
	I1212 01:35:17.313475  291455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:35:17.340045  291455 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
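crictl resolved the runtime from the /etc/crictl.yaml written a few steps earlier; an equivalent invocation that makes the endpoint explicit (a sketch, inside the node):

    # Same output as above, but without relying on /etc/crictl.yaml.
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version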
	I1212 01:35:17.340140  291455 ssh_runner.go:195] Run: containerd --version
	I1212 01:35:17.360301  291455 ssh_runner.go:195] Run: containerd --version
	I1212 01:35:17.385714  291455 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 01:35:17.388490  291455 cli_runner.go:164] Run: docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:35:17.404979  291455 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 01:35:17.409350  291455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
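The one-liner above is an idempotent hosts update: strip any stale entry for the name, append a fresh one to a temp file, then copy it over /etc/hosts so the write is a single replace. Spelled out with placeholder variables (a hypothetical equivalent, not minikube code):

    # NAME/IP mirror the logged command; the grep pattern is a literal tab + name
    # anchored at end of line, so unrelated entries survive.
    NAME=host.minikube.internal; IP=192.168.76.1
    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts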
	I1212 01:35:17.422610  291455 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 01:35:17.425426  291455 kubeadm.go:884] updating cluster {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:35:17.425578  291455 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:35:17.425675  291455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:35:17.450191  291455 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:35:17.450217  291455 containerd.go:534] Images already preloaded, skipping extraction
	I1212 01:35:17.450277  291455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:35:17.474185  291455 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:35:17.474220  291455 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:35:17.474228  291455 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1212 01:35:17.474373  291455 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-256959 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:35:17.474472  291455 ssh_runner.go:195] Run: sudo crictl info
	I1212 01:35:17.498662  291455 cni.go:84] Creating CNI manager for ""
	I1212 01:35:17.498685  291455 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:35:17.498869  291455 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 01:35:17.498905  291455 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-256959 NodeName:newest-cni-256959 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:35:17.499182  291455 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-256959"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:35:17.499276  291455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 01:35:17.511920  291455 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:35:17.512017  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:35:17.519602  291455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:35:17.532107  291455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 01:35:17.545262  291455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
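The rendered kubeadm config above was just copied to /var/tmp/minikube/kubeadm.yaml.new inside the node. As a hedged aside (not a step minikube performs here), kubeadm can lint such a file itself:

    # Hypothetical manual check; 'kubeadm config validate' has existed since v1.26
    # and parses every document in the file. Binary path taken from the log.
    docker exec newest-cni-256959 \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new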
	I1212 01:35:17.557618  291455 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:35:17.561053  291455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:35:17.570894  291455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:17.675958  291455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:35:17.692695  291455 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959 for IP: 192.168.76.2
	I1212 01:35:17.692715  291455 certs.go:195] generating shared ca certs ...
	I1212 01:35:17.692750  291455 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:17.692911  291455 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 01:35:17.692980  291455 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 01:35:17.692995  291455 certs.go:257] generating profile certs ...
	I1212 01:35:17.693112  291455 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key
	I1212 01:35:17.693202  291455 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93
	I1212 01:35:17.693309  291455 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key
	I1212 01:35:17.693447  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 01:35:17.693518  291455 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 01:35:17.693536  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:35:17.693582  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:35:17.693632  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:35:17.693666  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 01:35:17.693747  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:35:17.694397  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:35:17.712974  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:35:17.738035  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:35:17.758905  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:35:17.776423  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:35:17.805243  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:35:17.826665  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:35:17.847012  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:35:17.868946  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:35:17.887272  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 01:35:17.904023  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 01:35:17.920802  291455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:35:17.933645  291455 ssh_runner.go:195] Run: openssl version
	I1212 01:35:17.939797  291455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.946909  291455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:35:17.954537  291455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.958217  291455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.958301  291455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.998878  291455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:35:18.008093  291455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.016725  291455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 01:35:18.025237  291455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.029387  291455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.029458  291455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.072423  291455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:35:18.080329  291455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.088043  291455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 01:35:18.095703  291455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.100065  291455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.100135  291455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.141016  291455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
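The 8-hex-digit link names checked above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: the "openssl x509 -hash" runs compute them, and the symlinks reproduce what c_rehash would create so the certs resolve during verification. As a sketch:

    # Compute the hash bucket for a CA and create the lookup link OpenSSL expects.
    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${H}.0"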
	I1212 01:35:18.148423  291455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:35:18.152541  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:35:18.195372  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:35:18.236073  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:35:18.276924  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:35:18.317697  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:35:18.358213  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
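Each "-checkend 86400" run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means yes, and a one-line verdict is printed as well. For example:

    # Exit 0 = valid for at least another 24h; non-zero = expiring within 24h.
    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "still valid in 24h" || echo "expires within 24h"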
	I1212 01:35:18.400083  291455 kubeadm.go:401] StartCluster: {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:18.400177  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 01:35:18.400236  291455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:35:18.437669  291455 cri.go:89] found id: ""
	I1212 01:35:18.437744  291455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:35:18.446134  291455 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 01:35:18.446156  291455 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 01:35:18.446208  291455 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:35:18.453928  291455 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:35:18.454522  291455 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-256959" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:35:18.454766  291455 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-2343/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-256959" cluster setting kubeconfig missing "newest-cni-256959" context setting]
	I1212 01:35:18.455226  291455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:18.456674  291455 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:35:18.464597  291455 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1212 01:35:18.464630  291455 kubeadm.go:602] duration metric: took 18.46826ms to restartPrimaryControlPlane
	I1212 01:35:18.464640  291455 kubeadm.go:403] duration metric: took 64.568702ms to StartCluster
	I1212 01:35:18.464656  291455 settings.go:142] acquiring lock: {Name:mk6dd4250df69aeba4752e9f33aeef37272375c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:18.464716  291455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:35:18.465619  291455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:18.465827  291455 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 01:35:18.466211  291455 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:35:18.466236  291455 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:35:18.466355  291455 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-256959"
	I1212 01:35:18.466367  291455 addons.go:70] Setting dashboard=true in profile "newest-cni-256959"
	I1212 01:35:18.466371  291455 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-256959"
	I1212 01:35:18.466378  291455 addons.go:239] Setting addon dashboard=true in "newest-cni-256959"
	W1212 01:35:18.466385  291455 addons.go:248] addon dashboard should already be in state true
	I1212 01:35:18.466396  291455 host.go:66] Checking if "newest-cni-256959" exists ...
	I1212 01:35:18.466403  291455 host.go:66] Checking if "newest-cni-256959" exists ...
	I1212 01:35:18.466836  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:18.466869  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:18.467337  291455 addons.go:70] Setting default-storageclass=true in profile "newest-cni-256959"
	I1212 01:35:18.467363  291455 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-256959"
	I1212 01:35:18.467641  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:18.469758  291455 out.go:179] * Verifying Kubernetes components...
	I1212 01:35:18.473053  291455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:18.505578  291455 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:35:18.507992  291455 addons.go:239] Setting addon default-storageclass=true in "newest-cni-256959"
	I1212 01:35:18.508032  291455 host.go:66] Checking if "newest-cni-256959" exists ...
	I1212 01:35:18.508443  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:18.515343  291455 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:35:18.515364  291455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:35:18.515428  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:18.518345  291455 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 01:35:18.523100  291455 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1212 01:35:17.036393  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:19.036650  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:18.525972  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 01:35:18.526002  291455 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 01:35:18.526079  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:18.564602  291455 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:18.564630  291455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:35:18.564700  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:18.565404  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:18.592490  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:18.614974  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:18.707284  291455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:35:18.738514  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:35:18.783779  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 01:35:18.783804  291455 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 01:35:18.797813  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:18.817201  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 01:35:18.817275  291455 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 01:35:18.834247  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 01:35:18.834268  291455 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 01:35:18.850261  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 01:35:18.850281  291455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 01:35:18.864878  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 01:35:18.864902  291455 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 01:35:18.879989  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 01:35:18.880012  291455 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 01:35:18.893252  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 01:35:18.893275  291455 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 01:35:18.906457  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 01:35:18.906522  291455 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 01:35:18.919410  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:18.919484  291455 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 01:35:18.931957  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:19.295481  291455 api_server.go:52] waiting for apiserver process to appear ...
	W1212 01:35:19.295638  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.295690  291455 retry.go:31] will retry after 249.842732ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:35:19.295768  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.295783  291455 retry.go:31] will retry after 351.420897ms [stderr identical to the storageclass failure above]
	W1212 01:35:19.296118  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.296142  291455 retry.go:31] will retry after 281.426587ms [stderr identical to the dashboard failure above: all ten manifests fail validation with the same connection-refused error]
	I1212 01:35:19.296213  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
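
Interleaved with the retries, the "sudo pgrep -xnf kube-apiserver.*minikube.*" probes (one roughly every 500ms, per the timestamps) check whether a kube-apiserver process exists at all; none of the applies can succeed until something is listening on localhost:8443. A self-contained readiness check in the same spirit, where the address, interval, and timeout are assumptions, not values taken from minikube:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForAPIServer dials the apiserver port until a TCP connection
    // succeeds or the deadline passes.
    func waitForAPIServer(addr string, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, interval)
    		if err == nil {
    			conn.Close()
    			return nil // something is accepting connections
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("no listener on %s within %v", addr, timeout)
    }

    func main() {
    	if err := waitForAPIServer("localhost:8443", 500*time.Millisecond, 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
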
	I1212 01:35:19.546048  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:35:19.578494  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:19.622946  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1 [stderr: same connection-refused validation error as above]
	I1212 01:35:19.623064  291455 retry.go:31] will retry after 277.166543ms
	I1212 01:35:19.648375  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:19.656309  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f [the ten dashboard manifests]: Process exited with status 1 [stderr: same ten connection-refused validation errors as above]
	I1212 01:35:19.656406  291455 retry.go:31] will retry after 462.607475ms
	W1212 01:35:19.715463  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1 [stderr: same connection-refused validation error as above]
	I1212 01:35:19.715506  291455 retry.go:31] will retry after 556.232924ms
	I1212 01:35:19.796674  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:19.900383  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:19.963236  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1 [stderr: same connection-refused validation error as above]
	I1212 01:35:19.963266  291455 retry.go:31] will retry after 505.253944ms
	I1212 01:35:20.119589  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:20.186519  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f [the ten dashboard manifests]: Process exited with status 1 [stderr: same ten connection-refused validation errors as above]
	I1212 01:35:20.186613  291455 retry.go:31] will retry after 424.835438ms
	I1212 01:35:20.272893  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:20.296648  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:20.336051  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1 [stderr: same connection-refused validation error as above]
	I1212 01:35:20.336183  291455 retry.go:31] will retry after 483.909657ms
	I1212 01:35:20.469348  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:20.528062  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1 [stderr: same connection-refused validation error as above]
	I1212 01:35:20.528096  291455 retry.go:31] will retry after 804.643976ms
	I1212 01:35:20.612336  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:20.682501  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f [the ten dashboard manifests]: Process exited with status 1 [stderr: same ten connection-refused validation errors as above]
	I1212 01:35:20.682548  291455 retry.go:31] will retry after 558.97301ms
	I1212 01:35:20.795783  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:20.820454  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:20.905698  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1 [stderr: same connection-refused validation error as above]
	I1212 01:35:20.905732  291455 retry.go:31] will retry after 695.755311ms
	I1212 01:35:21.242222  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:21.295663  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:21.312788  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f [the ten dashboard manifests]: Process exited with status 1 [stderr: same ten connection-refused validation errors as above]
	I1212 01:35:21.312824  291455 retry.go:31] will retry after 1.866088371s
	I1212 01:35:21.333223  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:21.536481  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:23.536603  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:21.395495  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1 [stderr: same connection-refused validation error as above]
	I1212 01:35:21.395527  291455 retry.go:31] will retry after 1.442265452s
	I1212 01:35:21.601699  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:21.661918  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1 [stderr: same connection-refused validation error as above]
	I1212 01:35:21.661958  291455 retry.go:31] will retry after 965.923553ms
	I1212 01:35:21.796193  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:22.296596  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:22.628164  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:22.689983  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1 [stderr: same connection-refused validation error as above]
	I1212 01:35:22.690024  291455 retry.go:31] will retry after 2.419076287s
	I1212 01:35:22.796215  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:22.838490  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:22.896567  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1 [stderr: same connection-refused validation error as above]
	I1212 01:35:22.896595  291455 retry.go:31] will retry after 1.026441386s
	I1212 01:35:23.180088  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:23.242606  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f [the ten dashboard manifests]: Process exited with status 1 [stderr: same ten connection-refused validation errors as above]
	I1212 01:35:23.242641  291455 retry.go:31] will retry after 1.447175367s
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:23.295985  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:23.795677  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:23.924269  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:23.999262  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:23.999301  291455 retry.go:31] will retry after 3.676300513s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:24.295744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:24.690891  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:24.751142  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:24.751178  291455 retry.go:31] will retry after 2.523379824s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:24.796474  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:25.109290  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:25.170081  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:25.170117  291455 retry.go:31] will retry after 1.61445699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:25.296317  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:25.796411  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:26.295885  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:26.036848  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:28.536033  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:26.784844  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:26.796101  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:26.910864  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:26.910893  291455 retry.go:31] will retry after 5.25056634s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.275356  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:27.295815  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:27.348749  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.348785  291455 retry.go:31] will retry after 4.97523733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.676221  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:27.738144  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.738177  291455 retry.go:31] will retry after 5.096436926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.796329  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:28.296194  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:28.795721  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:29.296646  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:29.795689  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:30.295694  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:30.796607  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:31.296202  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
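
The pgrep lines repeating at a steady half-second cadence are minikube polling, over SSH, for a kube-apiserver process whose command line matches the profile. A minimal local sketch of the same check, run directly rather than through ssh_runner; the pgrep pattern is copied verbatim from the log:

// Poll pgrep until a matching kube-apiserver process appears.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pattern := "kube-apiserver.*minikube.*"
	for {
		// -f matches against the full argument list, -x requires the
		// pattern to match that whole command line, -n picks the
		// newest match.
		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
		if err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return
		}
		// pgrep exits 1 when nothing matches; poll again shortly,
		// mirroring the ~500ms cadence visible in the log.
		time.Sleep(500 * time.Millisecond)
	}
}
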
	W1212 01:35:30.536109  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:32.536508  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:35.036562  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
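
The node_ready.go warnings interleaved here belong to a different process (pid 287206, the no-preload profile) whose apiserver at 192.168.85.2:8443 is likewise refusing connections while it polls the node's Ready condition. A client-go sketch of that poll; the kubeconfig path, node name, and 2s interval are illustrative assumptions taken from the log's context, and the program needs a go.mod pulling in k8s.io/client-go:

// Poll a node's Ready condition, retrying on transient errors the way
// the node_ready.go:55 warnings suggest.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path for this profile.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "no-preload-361053", metav1.GetOptions{})
		if err != nil {
			// Connection refused surfaces here; log and retry rather
			// than failing outright, as the W-level lines above do.
			fmt.Println("error getting node (will retry):", err)
			time.Sleep(2 * time.Second)
			continue
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}
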
	I1212 01:35:31.795914  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:32.161653  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:32.223763  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.223796  291455 retry.go:31] will retry after 3.268815276s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.296204  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:32.325119  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:32.386121  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.386153  291455 retry.go:31] will retry after 5.854435808s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.796226  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:32.834968  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:32.909984  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.910017  291455 retry.go:31] will retry after 7.163447884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:33.296541  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:33.796667  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:34.295628  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:34.796652  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:35.295756  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:35.493366  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:35.556021  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:35.556054  291455 retry.go:31] will retry after 12.955659755s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:35.796356  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:36.296236  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:37.036788  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:39.536591  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:36.796391  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:37.295746  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:37.795722  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:38.241525  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:38.295983  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:38.315189  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:38.315224  291455 retry.go:31] will retry after 8.402358708s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:38.795800  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:39.296313  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:39.795769  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:40.074570  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:40.142371  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:40.142407  291455 retry.go:31] will retry after 11.797804339s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:40.295684  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:40.795715  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:41.295800  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:42.035934  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:44.036480  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:41.796201  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:42.295677  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:42.795870  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:43.296206  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:43.795818  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:44.295727  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:44.795706  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:45.296501  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:45.795731  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:46.296084  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:46.536110  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:48.536515  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:46.717860  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:46.778291  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:46.778324  291455 retry.go:31] will retry after 11.640937008s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:46.796419  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:47.296365  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:47.796242  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:48.295728  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:48.512617  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:48.620306  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:48.620334  291455 retry.go:31] will retry after 20.936993287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:48.795684  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:49.296228  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:49.796588  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:50.296351  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:50.796261  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:51.296609  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:50.536753  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:53.036546  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:51.796731  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:51.941351  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:52.001637  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:52.001682  291455 retry.go:31] will retry after 15.364088557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:52.296092  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:52.795636  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:53.296512  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:53.811922  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:54.295780  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:54.795777  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:55.296163  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:55.796273  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:56.295752  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:55.535981  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:57.536499  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:59.536582  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
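Interleaved with the apiserver polling, the parallel no-preload test (pid 287206) keeps probing the node's Ready condition at https://192.168.85.2:8443 and logging the connection-refused result before retrying. A rough client-go sketch of that readiness check (a hypothetical helper, not the test's own node_ready.go):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node's Ready condition is True.
    // While the apiserver is down, the Get itself fails with
    // "connect: connection refused", as in the warnings above.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		ready, err := nodeReady(cs, "no-preload-361053")
    		if err != nil {
    			fmt.Println("will retry:", err)
    		} else if ready {
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }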
	I1212 01:35:56.795693  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:57.295887  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:57.796459  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:58.296209  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:58.419661  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:58.488403  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:58.488438  291455 retry.go:31] will retry after 29.791340434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:58.796698  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:59.295744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:59.796477  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:00.295794  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:00.795759  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:01.296237  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:36:02.036574  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:04.036717  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:01.796304  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:02.296424  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:02.795750  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:03.296298  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:03.796668  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:04.296158  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:04.796345  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:05.296665  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:05.796526  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:06.295717  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:36:06.536543  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:09.036693  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:06.795806  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:07.296383  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:07.366524  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:36:07.433303  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:07.433335  291455 retry.go:31] will retry after 21.959421138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:07.795756  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:08.296562  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:08.795685  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:09.295744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:09.558068  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:36:09.643748  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:09.643785  291455 retry.go:31] will retry after 31.140330108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:09.796018  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:10.295683  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:10.795744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:11.295780  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:36:11.536613  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:13.536774  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:11.795645  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:12.295717  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:12.795762  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:13.296234  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:13.795775  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:14.296543  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:14.796297  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:15.295763  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:15.795884  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:16.296551  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:36:16.036849  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:18.536512  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:16.796640  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:17.295760  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:17.796208  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:18.296641  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:18.795858  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:18.795946  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:18.819559  291455 cri.go:89] found id: ""
	I1212 01:36:18.819585  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.819594  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:18.819605  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:18.819671  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:18.843419  291455 cri.go:89] found id: ""
	I1212 01:36:18.843444  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.843453  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:18.843459  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:18.843524  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:18.867870  291455 cri.go:89] found id: ""
	I1212 01:36:18.867894  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.867903  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:18.867910  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:18.867975  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:18.892504  291455 cri.go:89] found id: ""
	I1212 01:36:18.892528  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.892536  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:18.892543  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:18.892614  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:18.916462  291455 cri.go:89] found id: ""
	I1212 01:36:18.916484  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.916493  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:18.916499  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:18.916555  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:18.940793  291455 cri.go:89] found id: ""
	I1212 01:36:18.940818  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.940827  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:18.940833  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:18.940892  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:18.965485  291455 cri.go:89] found id: ""
	I1212 01:36:18.965513  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.965521  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:18.965527  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:18.965585  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:18.990141  291455 cri.go:89] found id: ""
	I1212 01:36:18.990170  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.990179  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
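Once pgrep has failed for long enough, minikube falls back to asking the CRI directly: one `crictl ps -a --quiet --name=<component>` per control-plane component, treating empty output as "no container found". A sketch of that enumeration, assuming local execution of crictl:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs returns the IDs crictl prints for containers whose
    // name matches component, in any state, mirroring the log's
    // "crictl ps -a --quiet --name=<component>" calls.
    func listContainerIDs(component string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, c := range components {
    		ids, err := listContainerIDs(c)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", c)
    			continue
    		}
    		fmt.Println(c, ids)
    	}
    }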
	I1212 01:36:18.990189  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:18.990202  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:19.044826  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:19.044860  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:19.058338  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:19.058373  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:19.121541  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:19.113010    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.113711    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.115490    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.116077    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.117640    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:19.113010    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.113711    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.115490    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.116077    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.117640    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:19.121602  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:19.121622  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:19.146904  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:19.146941  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
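With no control-plane containers to inspect, minikube gathers what diagnostics it can: kubelet and containerd journals, filtered dmesg, a (failing) describe nodes, and a container-status listing. A sketch of that sequence under the same local-exec assumption as above:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherLogs runs the same diagnostic commands the log shows once no
    // control-plane containers are found (sketch; minikube executes these
    // inside the node over SSH).
    func gatherLogs() {
    	cmds := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"containerd", "sudo journalctl -u containerd -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, c := range cmds {
    		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("%s: %v\n", c.name, err)
    			continue
    		}
    		fmt.Printf("== %s ==\n%s\n", c.name, out)
    	}
    }

    func main() { gatherLogs() }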
	W1212 01:36:21.036609  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:23.536552  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:21.678937  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:21.689641  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:21.689710  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:21.722833  291455 cri.go:89] found id: ""
	I1212 01:36:21.722854  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.722862  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:21.722869  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:21.722926  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:21.747286  291455 cri.go:89] found id: ""
	I1212 01:36:21.747323  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.747339  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:21.747346  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:21.747417  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:21.771941  291455 cri.go:89] found id: ""
	I1212 01:36:21.771965  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.771980  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:21.771987  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:21.772052  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:21.801075  291455 cri.go:89] found id: ""
	I1212 01:36:21.801104  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.801113  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:21.801119  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:21.801176  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:21.825561  291455 cri.go:89] found id: ""
	I1212 01:36:21.825587  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.825595  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:21.825601  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:21.825659  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:21.854532  291455 cri.go:89] found id: ""
	I1212 01:36:21.854559  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.854569  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:21.854580  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:21.854640  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:21.879725  291455 cri.go:89] found id: ""
	I1212 01:36:21.879789  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.879814  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:21.879828  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:21.879912  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:21.904405  291455 cri.go:89] found id: ""
	I1212 01:36:21.904428  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.904437  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:21.904446  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:21.904487  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:21.970611  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:21.962223    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.962657    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964375    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964860    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.966282    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:21.962223    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.962657    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964375    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964860    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.966282    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:21.970642  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:21.970659  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:21.995425  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:21.995463  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:22.024736  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:22.024767  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:22.082740  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:22.082785  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:24.597828  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:24.608497  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:24.608573  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:24.633951  291455 cri.go:89] found id: ""
	I1212 01:36:24.633978  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.633986  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:24.633992  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:24.634048  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:24.658904  291455 cri.go:89] found id: ""
	I1212 01:36:24.658929  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.658937  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:24.658944  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:24.659026  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:24.683684  291455 cri.go:89] found id: ""
	I1212 01:36:24.683709  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.683718  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:24.683724  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:24.683791  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:24.708745  291455 cri.go:89] found id: ""
	I1212 01:36:24.708770  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.708779  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:24.708786  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:24.708842  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:24.733454  291455 cri.go:89] found id: ""
	I1212 01:36:24.733479  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.733488  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:24.733494  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:24.733551  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:24.761862  291455 cri.go:89] found id: ""
	I1212 01:36:24.761889  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.761898  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:24.761904  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:24.761961  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:24.785388  291455 cri.go:89] found id: ""
	I1212 01:36:24.785415  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.785424  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:24.785430  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:24.785486  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:24.810681  291455 cri.go:89] found id: ""
	I1212 01:36:24.810707  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.810717  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:24.810727  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:24.810743  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:24.865711  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:24.865752  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:24.880399  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:24.880431  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:24.943187  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:24.935391    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.936083    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937614    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937904    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.939457    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:24.935391    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.936083    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937614    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937904    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.939457    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:24.943253  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:24.943274  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:24.967790  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:24.967820  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:36:26.036483  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:28.036687  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:30.036781  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:27.495634  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:27.506605  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:27.506700  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:27.548836  291455 cri.go:89] found id: ""
	I1212 01:36:27.548864  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.548873  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:27.548879  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:27.548953  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:27.600295  291455 cri.go:89] found id: ""
	I1212 01:36:27.600324  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.600334  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:27.600340  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:27.600397  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:27.625951  291455 cri.go:89] found id: ""
	I1212 01:36:27.625979  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.625987  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:27.625993  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:27.626062  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:27.651635  291455 cri.go:89] found id: ""
	I1212 01:36:27.651660  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.651668  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:27.651675  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:27.651734  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:27.676415  291455 cri.go:89] found id: ""
	I1212 01:36:27.676437  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.676446  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:27.676473  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:27.676535  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:27.699845  291455 cri.go:89] found id: ""
	I1212 01:36:27.699868  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.699876  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:27.699883  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:27.699938  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:27.735327  291455 cri.go:89] found id: ""
	I1212 01:36:27.735353  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.735362  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:27.735368  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:27.735428  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:27.759909  291455 cri.go:89] found id: ""
	I1212 01:36:27.759932  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.759940  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:27.759950  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:27.759961  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:27.786638  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:27.786667  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:27.841026  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:27.841058  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:27.854475  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:27.854508  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:27.917832  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:27.909374    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.909866    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911432    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911952    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.913437    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:27.909374    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.909866    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911432    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911952    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.913437    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:27.917855  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:27.917867  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
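The pass above (cri.go:54 / logs.go:282) is minikube probing, component by component, whether any control-plane container exists before falling back to gathering kubelet, dmesg, describe-nodes, and containerd logs. A minimal Go sketch of that per-component lookup loop, assuming crictl and sudo are available on the node; this is illustrative only, not minikube's actual cri.go:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Component names taken from the log above; the lookup mirrors
    	// "sudo crictl ps -a --quiet --name=<component>".
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("crictl lookup failed for %q: %v\n", name, err)
    			continue
    		}
    		if ids := strings.Fields(string(out)); len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    		} else {
    			fmt.Printf("%q: %d container(s)\n", name, len(ids))
    		}
    	}
    }

Every probe in the log returns an empty ID list, which suggests the static control-plane pods never came up; that is why each cycle ends in log gathering rather than a health check.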
	I1212 01:36:28.286241  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:36:28.389245  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:28.389279  291455 retry.go:31] will retry after 46.053342505s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:29.393036  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:36:29.455460  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:29.455496  291455 retry.go:31] will retry after 47.570792587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:30.443136  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:30.453668  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:30.453743  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:30.480117  291455 cri.go:89] found id: ""
	I1212 01:36:30.480141  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.480149  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:30.480155  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:30.480214  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:30.505432  291455 cri.go:89] found id: ""
	I1212 01:36:30.505460  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.505470  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:30.505478  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:30.505543  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:30.530571  291455 cri.go:89] found id: ""
	I1212 01:36:30.530598  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.530608  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:30.530614  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:30.530675  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:30.587393  291455 cri.go:89] found id: ""
	I1212 01:36:30.587429  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.587439  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:30.587445  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:30.587517  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:30.631827  291455 cri.go:89] found id: ""
	I1212 01:36:30.631894  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.631917  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:30.631941  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:30.632019  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:30.655968  291455 cri.go:89] found id: ""
	I1212 01:36:30.656043  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.656065  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:30.656077  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:30.656143  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:30.680079  291455 cri.go:89] found id: ""
	I1212 01:36:30.680101  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.680110  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:30.680116  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:30.680175  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:30.704249  291455 cri.go:89] found id: ""
	I1212 01:36:30.704324  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.704346  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:30.704365  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:30.704391  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:30.760587  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:30.760620  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:30.774118  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:30.774145  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:30.838730  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:30.831029    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.831642    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.833120    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.833546    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.835035    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:30.831029    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.831642    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.833120    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.833546    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.835035    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:30.838753  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:30.838765  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:30.863650  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:30.863684  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:36:32.039431  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:34.536636  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:33.391024  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:33.401417  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:33.401486  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:33.425243  291455 cri.go:89] found id: ""
	I1212 01:36:33.425265  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.425274  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:33.425280  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:33.425337  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:33.451769  291455 cri.go:89] found id: ""
	I1212 01:36:33.451792  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.451800  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:33.451806  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:33.451869  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:33.476935  291455 cri.go:89] found id: ""
	I1212 01:36:33.476960  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.476968  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:33.476974  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:33.477035  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:33.502755  291455 cri.go:89] found id: ""
	I1212 01:36:33.502781  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.502796  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:33.502802  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:33.502859  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:33.528810  291455 cri.go:89] found id: ""
	I1212 01:36:33.528835  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.528844  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:33.528851  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:33.528915  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:33.559119  291455 cri.go:89] found id: ""
	I1212 01:36:33.559197  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.559219  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:33.559237  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:33.559321  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:33.624518  291455 cri.go:89] found id: ""
	I1212 01:36:33.624547  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.624556  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:33.624562  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:33.624620  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:33.657379  291455 cri.go:89] found id: ""
	I1212 01:36:33.657401  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.657409  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:33.657418  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:33.657428  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:33.713396  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:33.713430  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:33.727420  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:33.727450  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:33.796759  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:33.788822    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.789567    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.791169    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.791683    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.792822    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:33.788822    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.789567    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.791169    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.791683    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.792822    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:33.796782  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:33.796795  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:33.822210  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:33.822246  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:36:37.036646  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:39.036700  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:36.350581  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:36.361065  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:36.361139  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:36.384625  291455 cri.go:89] found id: ""
	I1212 01:36:36.384647  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.384655  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:36.384661  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:36.384721  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:36.409313  291455 cri.go:89] found id: ""
	I1212 01:36:36.409338  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.409347  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:36.409353  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:36.409414  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:36.437773  291455 cri.go:89] found id: ""
	I1212 01:36:36.437796  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.437804  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:36.437811  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:36.437875  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:36.462058  291455 cri.go:89] found id: ""
	I1212 01:36:36.462080  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.462089  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:36.462096  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:36.462158  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:36.485881  291455 cri.go:89] found id: ""
	I1212 01:36:36.485902  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.485911  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:36.485917  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:36.485973  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:36.510249  291455 cri.go:89] found id: ""
	I1212 01:36:36.510318  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.510340  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:36.510362  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:36.510444  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:36.546913  291455 cri.go:89] found id: ""
	I1212 01:36:36.546948  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.546957  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:36.546963  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:36.547067  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:36.604532  291455 cri.go:89] found id: ""
	I1212 01:36:36.604562  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.604571  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:36.604580  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:36.604593  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:36.684036  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:36.674581    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.675420    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.677203    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.677878    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.679666    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:36.674581    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.675420    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.677203    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.677878    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.679666    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:36.684061  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:36.684074  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:36.709835  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:36.709866  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:36.737742  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:36.737768  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:36.792829  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:36.792864  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:39.307416  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:39.317852  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:39.317952  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:39.342723  291455 cri.go:89] found id: ""
	I1212 01:36:39.342747  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.342756  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:39.342763  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:39.342821  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:39.367433  291455 cri.go:89] found id: ""
	I1212 01:36:39.367472  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.367485  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:39.367492  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:39.367559  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:39.392871  291455 cri.go:89] found id: ""
	I1212 01:36:39.392896  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.392904  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:39.392911  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:39.392974  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:39.417519  291455 cri.go:89] found id: ""
	I1212 01:36:39.417546  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.417555  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:39.417562  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:39.417621  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:39.441729  291455 cri.go:89] found id: ""
	I1212 01:36:39.441760  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.441769  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:39.441775  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:39.441841  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:39.466118  291455 cri.go:89] found id: ""
	I1212 01:36:39.466147  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.466156  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:39.466163  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:39.466225  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:39.491269  291455 cri.go:89] found id: ""
	I1212 01:36:39.491292  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.491304  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:39.491310  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:39.491375  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:39.515625  291455 cri.go:89] found id: ""
	I1212 01:36:39.515650  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.515659  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:39.515668  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:39.515679  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:39.595337  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:39.595376  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:39.617464  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:39.617500  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:39.698043  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:39.689431    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.689924    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.691689    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.692010    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.693641    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:39.689431    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.689924    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.691689    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.692010    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.693641    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:39.698068  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:39.698080  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:39.722656  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:39.722692  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:40.784380  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:36:40.845895  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:36:40.846018  291455 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
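The storageclass addon fails for the same root cause: kubectl's client-side validation downloads /openapi/v2 from the apiserver, so with nothing listening on localhost:8443 even schema validation of a local YAML file fails (hence the suggestion to pass --validate=false). A minimal sketch of the invocation being retried, mirroring the command line in the log; it assumes the node paths shown above and relies on sudo accepting the leading KUBECONFIG=... environment assignment, and is illustrative rather than minikube's addon code:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // applyAddon mirrors the log's invocation:
    //   sudo KUBECONFIG=... /var/lib/minikube/binaries/.../kubectl apply --force -f <manifest>
    func applyAddon(manifest string) error {
    	cmd := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
    		"apply", "--force", "-f", manifest)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("apply %s: %w\n%s", manifest, err, out)
    	}
    	return nil
    }

    func main() {
    	if err := applyAddon("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
    		fmt.Println(err)
    	}
    }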
	W1212 01:36:41.536608  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:44.036506  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:42.256252  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:42.269504  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:42.269576  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:42.296285  291455 cri.go:89] found id: ""
	I1212 01:36:42.296314  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.296323  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:42.296330  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:42.296393  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:42.324314  291455 cri.go:89] found id: ""
	I1212 01:36:42.324349  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.324366  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:42.324373  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:42.324448  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:42.353000  291455 cri.go:89] found id: ""
	I1212 01:36:42.353024  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.353033  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:42.353039  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:42.353103  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:42.379029  291455 cri.go:89] found id: ""
	I1212 01:36:42.379057  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.379066  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:42.379073  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:42.379141  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:42.404039  291455 cri.go:89] found id: ""
	I1212 01:36:42.404068  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.404077  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:42.404084  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:42.404150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:42.429848  291455 cri.go:89] found id: ""
	I1212 01:36:42.429877  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.429887  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:42.429893  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:42.429952  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:42.454022  291455 cri.go:89] found id: ""
	I1212 01:36:42.454049  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.454058  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:42.454065  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:42.454126  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:42.481205  291455 cri.go:89] found id: ""
	I1212 01:36:42.481231  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.481240  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:42.481249  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:42.481260  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:42.511373  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:42.511400  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:42.594053  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:42.594092  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:42.613172  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:42.613201  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:42.688118  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:42.678899    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.679678    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.681197    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.681708    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.683477    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:42.678899    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.679678    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.681197    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.681708    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.683477    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:42.688142  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:42.688155  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:45.213644  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:45.234582  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:45.234677  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:45.268686  291455 cri.go:89] found id: ""
	I1212 01:36:45.268715  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.268732  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:45.268741  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:45.268827  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:45.297061  291455 cri.go:89] found id: ""
	I1212 01:36:45.297115  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.297132  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:45.297139  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:45.297272  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:45.324030  291455 cri.go:89] found id: ""
	I1212 01:36:45.324063  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.324072  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:45.324078  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:45.324144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:45.354569  291455 cri.go:89] found id: ""
	I1212 01:36:45.354595  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.354612  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:45.354619  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:45.354697  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:45.380068  291455 cri.go:89] found id: ""
	I1212 01:36:45.380133  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.380160  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:45.380175  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:45.380249  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:45.403554  291455 cri.go:89] found id: ""
	I1212 01:36:45.403620  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.403643  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:45.403664  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:45.403746  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:45.426534  291455 cri.go:89] found id: ""
	I1212 01:36:45.426560  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.426568  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:45.426574  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:45.426637  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:45.455346  291455 cri.go:89] found id: ""
	I1212 01:36:45.455414  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.455438  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
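The scan just completed is minikube's container inventory: it asks the CRI for each expected control-plane and addon container by name and, in this run, finds none of them. Roughly the same sweep can be done by hand; the name list below is the one visible in the log:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      printf '%-24s ' "$name"
      sudo crictl ps -a --quiet --name="$name" | wc -l   # 0 => not found
    done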
	I1212 01:36:45.455457  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:45.455469  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:45.510486  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:45.510521  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
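The dmesg invocation is tuned for capture rather than interactive reading; annotated (flags per util-linux dmesg):

    sudo dmesg -P -H -L=never --level warn,err,crit,alert,emerg | tail -n 400
    #  -H           human-readable timestamps
    #  -P           do not pipe the output through a pager (-H would start one)
    #  -L=never     disable color escapes in the captured text
    #  --level ...  keep only warning-and-worse priorities

The nearby journalctl calls follow the same idea: -u selects a single unit (kubelet, containerd), -n 400 keeps the last 400 lines.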
	I1212 01:36:45.523916  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:45.523944  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:45.642152  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:45.624680    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.625385    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.635164    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.635878    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.637755    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:45.624680    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.625385    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.635164    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.635878    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.637755    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:45.642173  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:45.642186  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:45.667625  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:45.667661  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
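The container-status command relies on a small fallback chain: the backtick substitution resolves crictl's full path when it is installed; if which finds nothing, the bare word crictl is substituted, that invocation fails, and the || falls through to Docker. Expanded for readability (same behavior, modern substitution syntax):

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a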
	W1212 01:36:46.535816  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:48.537737  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
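The two warnings above come from a different process (287206, driving the no-preload-361053 cluster) polling that node's Ready condition while its own apiserver at 192.168.85.2:8443 is also refusing connections; its lines are interleaved with the 291455 stream throughout this excerpt, which is why timestamps occasionally step backwards. A quick external probe for that endpoint, assuming the address shown in the log:

    # -k: the apiserver's cert is not trusted by the host; only reachability matters here
    curl -k --max-time 5 https://192.168.85.2:8443/healthz || echo "apiserver unreachable"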
	I1212 01:36:48.197188  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:48.208199  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:48.208272  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:48.236943  291455 cri.go:89] found id: ""
	I1212 01:36:48.236969  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.236977  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:48.236984  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:48.237048  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:48.262444  291455 cri.go:89] found id: ""
	I1212 01:36:48.262468  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.262477  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:48.262483  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:48.262545  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:48.292262  291455 cri.go:89] found id: ""
	I1212 01:36:48.292292  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.292301  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:48.292307  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:48.292370  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:48.318028  291455 cri.go:89] found id: ""
	I1212 01:36:48.318053  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.318063  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:48.318069  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:48.318128  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:48.343500  291455 cri.go:89] found id: ""
	I1212 01:36:48.343524  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.343532  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:48.343539  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:48.343620  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:48.374537  291455 cri.go:89] found id: ""
	I1212 01:36:48.374563  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.374572  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:48.374578  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:48.374657  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:48.399165  291455 cri.go:89] found id: ""
	I1212 01:36:48.399188  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.399197  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:48.399203  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:48.399265  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:48.424429  291455 cri.go:89] found id: ""
	I1212 01:36:48.424452  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.424460  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:48.424469  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:48.424482  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:48.450297  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:48.450336  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:48.477992  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:48.478017  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:48.533513  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:48.533546  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:48.554972  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:48.555078  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:48.639199  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:48.628523    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.629323    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.630881    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.631460    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.634979    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:48.628523    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.629323    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.630881    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.631460    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.634979    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
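From here the log repeats the same probe cycle. The pgrep timestamps (01:36:45.2, :48.2, :51.1, :54.1, ...) show a fresh attempt roughly every three seconds, continuing until minikube's wait deadline expires. The pattern, reduced to a sketch (the 3 s interval is inferred from those timestamps, not read from minikube's source):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3                     # observed cadence of the probes in this log
    done                          # in this run the condition never becomes true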
	I1212 01:36:51.139443  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:51.152801  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:51.152869  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:51.181036  291455 cri.go:89] found id: ""
	I1212 01:36:51.181060  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.181069  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:51.181076  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:51.181139  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:51.205637  291455 cri.go:89] found id: ""
	I1212 01:36:51.205664  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.205673  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:51.205680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:51.205744  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:51.230375  291455 cri.go:89] found id: ""
	I1212 01:36:51.230401  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.230410  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:51.230416  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:51.230479  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:51.260594  291455 cri.go:89] found id: ""
	I1212 01:36:51.260620  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.260629  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:51.260636  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:51.260693  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:51.286513  291455 cri.go:89] found id: ""
	I1212 01:36:51.286538  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.286548  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:51.286554  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:51.286613  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:51.320488  291455 cri.go:89] found id: ""
	I1212 01:36:51.320511  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.320519  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:51.320526  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:51.320593  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1212 01:36:51.035818  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:53.036491  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:55.036601  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:51.346751  291455 cri.go:89] found id: ""
	I1212 01:36:51.346773  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.346782  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:51.346788  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:51.346848  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:51.372774  291455 cri.go:89] found id: ""
	I1212 01:36:51.372797  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.372805  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:51.372820  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:51.372832  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:51.397287  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:51.397322  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:51.424395  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:51.424423  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:51.484364  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:51.484400  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:51.497751  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:51.497778  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:51.609432  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:51.593650    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.595213    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.596974    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.601995    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.602562    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:51.593650    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.595213    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.596974    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.601995    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.602562    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:54.111055  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:54.123333  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:54.123404  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:54.147152  291455 cri.go:89] found id: ""
	I1212 01:36:54.147218  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.147246  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:54.147268  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:54.147370  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:54.172120  291455 cri.go:89] found id: ""
	I1212 01:36:54.172186  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.172212  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:54.172233  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:54.172318  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:54.199177  291455 cri.go:89] found id: ""
	I1212 01:36:54.199242  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.199262  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:54.199269  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:54.199346  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:54.223691  291455 cri.go:89] found id: ""
	I1212 01:36:54.223716  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.223724  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:54.223731  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:54.223796  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:54.248969  291455 cri.go:89] found id: ""
	I1212 01:36:54.248991  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.249000  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:54.249007  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:54.249076  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:54.274124  291455 cri.go:89] found id: ""
	I1212 01:36:54.274149  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.274158  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:54.274165  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:54.274223  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:54.299049  291455 cri.go:89] found id: ""
	I1212 01:36:54.299071  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.299079  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:54.299085  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:54.299142  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:54.323692  291455 cri.go:89] found id: ""
	I1212 01:36:54.323727  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.323736  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:54.323745  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:54.323757  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:54.337075  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:54.337102  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:54.405905  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:54.396717    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.397409    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399032    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399536    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.401700    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:54.396717    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.397409    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399032    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399536    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.401700    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:54.405927  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:54.405938  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:54.432446  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:54.432489  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:54.461143  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:54.461170  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 01:36:57.536480  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:59.536672  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:57.017892  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:57.031680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:57.031754  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:57.058619  291455 cri.go:89] found id: ""
	I1212 01:36:57.058644  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.058661  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:57.058670  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:57.058744  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:57.082470  291455 cri.go:89] found id: ""
	I1212 01:36:57.082496  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.082505  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:57.082511  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:57.082569  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:57.107129  291455 cri.go:89] found id: ""
	I1212 01:36:57.107152  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.107161  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:57.107174  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:57.107235  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:57.131240  291455 cri.go:89] found id: ""
	I1212 01:36:57.131264  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.131272  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:57.131282  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:57.131339  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:57.161702  291455 cri.go:89] found id: ""
	I1212 01:36:57.161728  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.161737  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:57.161743  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:57.161800  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:57.186568  291455 cri.go:89] found id: ""
	I1212 01:36:57.186592  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.186601  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:57.186607  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:57.186724  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:57.211286  291455 cri.go:89] found id: ""
	I1212 01:36:57.211310  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.211319  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:57.211325  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:57.211382  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:57.236370  291455 cri.go:89] found id: ""
	I1212 01:36:57.236394  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.236403  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:57.236412  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:57.236423  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:57.292504  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:57.292539  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:57.306287  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:57.306314  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:57.369836  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:57.361540    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.362207    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.363914    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.364465    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.366079    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:57.361540    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.362207    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.363914    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.364465    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.366079    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:57.369856  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:57.369870  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:57.395588  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:57.395625  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:59.923774  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:59.935843  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:59.935936  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:59.961362  291455 cri.go:89] found id: ""
	I1212 01:36:59.961383  291455 logs.go:282] 0 containers: []
	W1212 01:36:59.961392  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:59.961398  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:59.961453  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:59.987418  291455 cri.go:89] found id: ""
	I1212 01:36:59.987448  291455 logs.go:282] 0 containers: []
	W1212 01:36:59.987458  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:59.987463  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:59.987521  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:00.083321  291455 cri.go:89] found id: ""
	I1212 01:37:00.083352  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.083362  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:00.083369  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:00.083456  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:00.200170  291455 cri.go:89] found id: ""
	I1212 01:37:00.200535  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.200580  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:00.200686  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:00.201034  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:00.291145  291455 cri.go:89] found id: ""
	I1212 01:37:00.291235  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.291284  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:00.291318  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:00.291414  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:00.393558  291455 cri.go:89] found id: ""
	I1212 01:37:00.393606  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.393618  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:00.393626  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:00.393706  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:00.423985  291455 cri.go:89] found id: ""
	I1212 01:37:00.424023  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.424035  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:00.424041  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:00.424117  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:00.451670  291455 cri.go:89] found id: ""
	I1212 01:37:00.451695  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.451705  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:00.451715  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:00.451728  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:00.509577  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:00.509614  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:00.525099  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:00.525133  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:00.635419  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:00.627409    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.628095    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.629751    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.630057    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.631588    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:00.627409    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.628095    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.629751    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.630057    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.631588    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:00.635455  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:00.635468  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:00.663944  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:00.663984  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:37:02.037994  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:04.536623  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:03.194688  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:03.205352  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:03.205425  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:03.233099  291455 cri.go:89] found id: ""
	I1212 01:37:03.233131  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.233140  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:03.233146  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:03.233217  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:03.257676  291455 cri.go:89] found id: ""
	I1212 01:37:03.257700  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.257710  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:03.257716  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:03.257802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:03.282622  291455 cri.go:89] found id: ""
	I1212 01:37:03.282696  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.282719  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:03.282739  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:03.282834  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:03.309162  291455 cri.go:89] found id: ""
	I1212 01:37:03.309190  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.309199  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:03.309205  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:03.309265  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:03.334284  291455 cri.go:89] found id: ""
	I1212 01:37:03.334318  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.334327  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:03.334334  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:03.334401  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:03.361255  291455 cri.go:89] found id: ""
	I1212 01:37:03.361281  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.361290  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:03.361296  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:03.361376  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:03.386372  291455 cri.go:89] found id: ""
	I1212 01:37:03.386406  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.386415  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:03.386421  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:03.386490  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:03.412127  291455 cri.go:89] found id: ""
	I1212 01:37:03.412151  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.412160  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:03.412170  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:03.412181  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:03.467933  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:03.467980  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:03.481636  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:03.481663  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:03.565451  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:03.551611    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.552450    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.553999    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.554567    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.556109    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:03.551611    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.552450    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.553999    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.554567    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.556109    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:03.565476  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:03.565548  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:03.614744  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:03.614783  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:06.159160  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:06.169841  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:06.169916  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:06.196496  291455 cri.go:89] found id: ""
	I1212 01:37:06.196521  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.196529  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:06.196536  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:06.196594  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:06.229404  291455 cri.go:89] found id: ""
	I1212 01:37:06.229429  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.229438  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:06.229444  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:06.229505  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:06.254056  291455 cri.go:89] found id: ""
	I1212 01:37:06.254081  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.254089  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:06.254095  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:06.254154  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:06.278424  291455 cri.go:89] found id: ""
	I1212 01:37:06.278453  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.278462  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:06.278469  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:06.278527  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:06.302517  291455 cri.go:89] found id: ""
	I1212 01:37:06.302545  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.302554  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:06.302560  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:06.302617  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:06.328634  291455 cri.go:89] found id: ""
	I1212 01:37:06.328657  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.328665  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:06.328671  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:06.328728  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:06.352026  291455 cri.go:89] found id: ""
	I1212 01:37:06.352099  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.352115  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:06.352125  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:06.352199  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:06.376075  291455 cri.go:89] found id: ""
	I1212 01:37:06.376101  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.376110  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
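	Each of these cri.go blocks walks a fixed list of control-plane component names and asks crictl for any matching container; `found id: ""` followed by `0 containers` means containerd has no record, running or exited, of that component. The equivalent manual check on the node is the same command the log runs:
	
		# run on the node; an empty result matches the `found id: ""` lines above
		sudo crictl ps -a --quiet --name=kube-apiserver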
	I1212 01:37:06.376119  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:06.376130  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:06.400451  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:06.400481  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:06.428356  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:06.428385  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:06.484230  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:06.484267  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
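	The per-cycle log gathering uses plain journalctl and dmesg invocations, so the same data can be pulled from the node by hand:
	
		sudo journalctl -u containerd -n 400    # last 400 lines from the container runtime
		sudo journalctl -u kubelet -n 400       # last 400 lines from the kubelet
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings and errors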
	I1212 01:37:06.498047  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:06.498074  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:06.610705  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:06.593235    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.594305    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.599655    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.603092    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.603422    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:06.593235    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.594305    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.599655    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.603092    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.603422    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 01:37:07.035836  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:09.035916  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:09.111534  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
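	The pgrep probe that opens each cycle is how minikube checks for a live apiserver process before falling back to log collection. Reading the flags from the command above: -f matches against the full command line, -x requires the pattern to match that command line exactly, and -n returns only the newest match:
	
		# prints a PID when a minikube-started kube-apiserver is running; exits 1 here
		sudo pgrep -xnf 'kube-apiserver.*minikube.*'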
	I1212 01:37:09.121786  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:09.121855  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:09.148241  291455 cri.go:89] found id: ""
	I1212 01:37:09.148267  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.148275  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:09.148282  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:09.148341  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:09.172742  291455 cri.go:89] found id: ""
	I1212 01:37:09.172764  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.172773  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:09.172779  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:09.172835  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:09.197560  291455 cri.go:89] found id: ""
	I1212 01:37:09.197586  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.197595  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:09.197601  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:09.197673  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:09.222352  291455 cri.go:89] found id: ""
	I1212 01:37:09.222377  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.222386  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:09.222392  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:09.222450  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:09.246770  291455 cri.go:89] found id: ""
	I1212 01:37:09.246794  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.246802  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:09.246809  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:09.246875  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:09.273237  291455 cri.go:89] found id: ""
	I1212 01:37:09.273260  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.273268  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:09.273275  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:09.273342  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:09.298382  291455 cri.go:89] found id: ""
	I1212 01:37:09.298405  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.298414  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:09.298421  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:09.298479  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:09.326366  291455 cri.go:89] found id: ""
	I1212 01:37:09.326388  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.326396  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:09.326405  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:09.326416  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:09.339892  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:09.339920  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:09.408533  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:09.399583    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.400465    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.402243    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.402860    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.404361    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:09.399583    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.400465    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.402243    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.402860    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.404361    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:09.408555  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:09.408568  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:09.434113  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:09.434149  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:09.469040  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:09.469065  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 01:37:11.036562  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
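	Interleaved with the 291455 output, process 287206 is polling the Ready condition of node no-preload-361053 at 192.168.85.2:8443 and retries for as long as the connection is refused. A rough equivalent of that poll, with the endpoint and node name taken from the log (credentials omitted; a sketch, not the test's actual client code):
	
		kubectl --server=https://192.168.85.2:8443 --insecure-skip-tls-verify \
		  get node no-preload-361053 \
		  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'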
	I1212 01:37:12.025102  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:12.036649  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:12.036722  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:12.064882  291455 cri.go:89] found id: ""
	I1212 01:37:12.064905  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.064913  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:12.064919  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:12.064979  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:12.090328  291455 cri.go:89] found id: ""
	I1212 01:37:12.090354  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.090362  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:12.090369  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:12.090429  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:12.115640  291455 cri.go:89] found id: ""
	I1212 01:37:12.115665  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.115674  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:12.115680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:12.115741  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:12.140726  291455 cri.go:89] found id: ""
	I1212 01:37:12.140752  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.140773  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:12.140810  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:12.140900  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:12.165182  291455 cri.go:89] found id: ""
	I1212 01:37:12.165208  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.165216  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:12.165223  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:12.165282  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:12.189365  291455 cri.go:89] found id: ""
	I1212 01:37:12.189389  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.189398  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:12.189405  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:12.189463  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:12.214048  291455 cri.go:89] found id: ""
	I1212 01:37:12.214073  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.214082  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:12.214088  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:12.214148  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:12.240794  291455 cri.go:89] found id: ""
	I1212 01:37:12.240821  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.240830  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:12.240840  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:12.240851  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:12.300894  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:12.300936  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:12.314783  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:12.314817  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:12.382362  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:12.373621    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.374371    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.376069    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.376636    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.378249    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:12.373621    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.374371    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.376069    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.376636    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.378249    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:12.382385  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:12.382397  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:12.408884  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:12.408921  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:37:13.536873  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:14.444251  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:37:14.509220  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:37:14.509386  291455 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
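	The dashboard addon fails before anything reaches the cluster: kubectl's client-side validation has to download the OpenAPI schema from the apiserver, and that download is what hits the refused connection. The --validate=false hint in the error text would only skip schema validation; the apply would still fail against a server that is not accepting connections. The command being retried, with paths exactly as in the log:
	
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
		  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
		  -f /etc/kubernetes/addons/dashboard-ns.yaml
		# --validate=false silences the openapi error, but localhost:8443 stays unreachable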
	I1212 01:37:14.942929  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:14.953301  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:14.953373  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:14.977865  291455 cri.go:89] found id: ""
	I1212 01:37:14.977933  291455 logs.go:282] 0 containers: []
	W1212 01:37:14.977947  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:14.977954  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:14.978019  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:15.012296  291455 cri.go:89] found id: ""
	I1212 01:37:15.012325  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.012335  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:15.012342  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:15.012414  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:15.044602  291455 cri.go:89] found id: ""
	I1212 01:37:15.044629  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.044638  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:15.044644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:15.044705  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:15.072008  291455 cri.go:89] found id: ""
	I1212 01:37:15.072035  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.072043  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:15.072049  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:15.072112  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:15.098264  291455 cri.go:89] found id: ""
	I1212 01:37:15.098293  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.098308  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:15.098316  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:15.098390  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:15.124176  291455 cri.go:89] found id: ""
	I1212 01:37:15.124203  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.124212  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:15.124218  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:15.124278  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:15.148763  291455 cri.go:89] found id: ""
	I1212 01:37:15.148788  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.148797  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:15.148803  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:15.148880  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:15.173843  291455 cri.go:89] found id: ""
	I1212 01:37:15.173870  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.173879  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:15.173889  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:15.173901  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:15.203728  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:15.203757  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:15.259019  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:15.259053  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:15.272480  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:15.272509  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:15.337558  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:15.329071    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.329763    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.331497    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.332089    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.333695    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:15.329071    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.329763    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.331497    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.332089    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.333695    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:15.337580  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:15.337592  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:17.027133  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:37:17.109229  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:37:17.109319  291455 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1212 01:37:16.035841  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:17.112386  291455 out.go:179] * Enabled addons: 
	I1212 01:37:17.115266  291455 addons.go:530] duration metric: took 1m58.649036473s for enable addons: enabled=[]
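	The `enabled=[]` in the duration metric confirms that neither addon (dashboard, storage-provisioner) was applied successfully; minikube records the elapsed time for the enable phase even when the resulting list is empty. Once the apiserver is reachable again, the addon state could be inspected with (<profile> again a placeholder):
	
		minikube -p <profile> addons list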
	I1212 01:37:17.864277  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:17.875687  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:17.875762  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:17.900504  291455 cri.go:89] found id: ""
	I1212 01:37:17.900527  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.900536  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:17.900542  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:17.900626  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:17.925113  291455 cri.go:89] found id: ""
	I1212 01:37:17.925136  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.925145  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:17.925151  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:17.925238  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:17.950585  291455 cri.go:89] found id: ""
	I1212 01:37:17.950611  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.950620  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:17.950626  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:17.950687  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:17.977787  291455 cri.go:89] found id: ""
	I1212 01:37:17.977813  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.977822  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:17.977828  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:17.977888  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:18.006885  291455 cri.go:89] found id: ""
	I1212 01:37:18.006967  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.007019  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:18.007043  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:18.007118  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:18.033137  291455 cri.go:89] found id: ""
	I1212 01:37:18.033161  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.033170  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:18.033176  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:18.033238  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1212 01:37:18.035966  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:18.058968  291455 cri.go:89] found id: ""
	I1212 01:37:18.059009  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.059019  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:18.059025  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:18.059087  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:18.084927  291455 cri.go:89] found id: ""
	I1212 01:37:18.084961  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.084971  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:18.084981  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:18.084994  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:18.153070  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:18.145061    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.145891    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.147207    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.147819    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.149000    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:18.145061    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.145891    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.147207    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.147819    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.149000    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:18.153101  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:18.153113  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:18.178193  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:18.178227  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:18.205844  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:18.205874  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:18.261619  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:18.261657  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 01:37:20.036082  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:20.775910  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:20.797119  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:20.797192  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:20.870519  291455 cri.go:89] found id: ""
	I1212 01:37:20.870556  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.870566  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:20.870573  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:20.870642  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:20.895021  291455 cri.go:89] found id: ""
	I1212 01:37:20.895044  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.895053  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:20.895059  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:20.895119  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:20.918242  291455 cri.go:89] found id: ""
	I1212 01:37:20.918270  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.918279  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:20.918286  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:20.918340  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:20.942755  291455 cri.go:89] found id: ""
	I1212 01:37:20.942781  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.942790  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:20.942796  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:20.942855  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:20.966487  291455 cri.go:89] found id: ""
	I1212 01:37:20.966551  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.966574  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:20.966595  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:20.966680  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:20.992848  291455 cri.go:89] found id: ""
	I1212 01:37:20.992922  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.992945  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:20.992959  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:20.993035  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:21.025558  291455 cri.go:89] found id: ""
	I1212 01:37:21.025587  291455 logs.go:282] 0 containers: []
	W1212 01:37:21.025596  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:21.025602  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:21.025663  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:21.050967  291455 cri.go:89] found id: ""
	I1212 01:37:21.051023  291455 logs.go:282] 0 containers: []
	W1212 01:37:21.051032  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:21.051041  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:21.051057  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:21.077368  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:21.077396  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:21.133503  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:21.133538  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:21.147218  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:21.147245  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:21.209763  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:21.201479    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.202138    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.203803    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.204409    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.205960    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:21.201479    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.202138    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.203803    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.204409    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.205960    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:21.209786  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:21.209799  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1212 01:37:22.036593  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:23.737746  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:23.747983  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:23.748051  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:23.772289  291455 cri.go:89] found id: ""
	I1212 01:37:23.772315  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.772333  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:23.772341  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:23.772420  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:23.848280  291455 cri.go:89] found id: ""
	I1212 01:37:23.848306  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.848315  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:23.848322  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:23.848386  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:23.884675  291455 cri.go:89] found id: ""
	I1212 01:37:23.884700  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.884709  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:23.884715  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:23.884777  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:23.914530  291455 cri.go:89] found id: ""
	I1212 01:37:23.914553  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.914561  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:23.914569  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:23.914626  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:23.940203  291455 cri.go:89] found id: ""
	I1212 01:37:23.940275  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.940292  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:23.940299  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:23.940364  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:23.968920  291455 cri.go:89] found id: ""
	I1212 01:37:23.968944  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.968952  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:23.968959  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:23.969016  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:23.993883  291455 cri.go:89] found id: ""
	I1212 01:37:23.993910  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.993919  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:23.993925  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:23.993985  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:24.019876  291455 cri.go:89] found id: ""
	I1212 01:37:24.019901  291455 logs.go:282] 0 containers: []
	W1212 01:37:24.019909  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:24.019922  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:24.019935  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:24.052560  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:24.052586  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:24.107812  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:24.107847  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:24.121870  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:24.121902  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:24.193432  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:24.184434    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.184974    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.185943    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.187426    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.187845    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:24.184434    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.184974    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.185943    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.187426    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.187845    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
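
The cycle above is minikube's control-plane probe: for each expected component it lists all CRI containers by name and records an empty match. A minimal bash sketch of the same probe, assuming only that crictl is installed and talking to the default containerd socket (the component list is copied from the log lines above):

    #!/usr/bin/env bash
    # Re-run the per-component CRI probe that produces the
    # 'found id: ""' / 'No container was found matching' lines above.
    set -u
    components=(kube-apiserver etcd coredns kube-scheduler kube-proxy
                kube-controller-manager kindnet kubernetes-dashboard)
    for name in "${components[@]}"; do
      # --quiet prints container IDs only; empty output means no match.
      ids="$(sudo crictl ps -a --quiet --name="$name")"
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        printf '%s: %s\n' "$name" "$ids"
      fi
    done

An empty result for every component, as in this run, typically means containerd itself is up but kubelet has not (re)created the static control-plane pods.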
	I1212 01:37:24.193458  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:24.193471  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1212 01:37:26.536664  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:29.036444  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:26.720901  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:26.732114  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:26.732194  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:26.759421  291455 cri.go:89] found id: ""
	I1212 01:37:26.759443  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.759451  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:26.759458  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:26.759523  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:26.801227  291455 cri.go:89] found id: ""
	I1212 01:37:26.801252  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.801261  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:26.801290  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:26.801371  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:26.836143  291455 cri.go:89] found id: ""
	I1212 01:37:26.836168  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.836178  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:26.836184  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:26.836276  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:26.880334  291455 cri.go:89] found id: ""
	I1212 01:37:26.880373  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.880382  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:26.880388  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:26.880477  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:26.915704  291455 cri.go:89] found id: ""
	I1212 01:37:26.915769  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.915786  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:26.915793  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:26.915864  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:26.943219  291455 cri.go:89] found id: ""
	I1212 01:37:26.943252  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.943262  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:26.943269  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:26.943350  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:26.968790  291455 cri.go:89] found id: ""
	I1212 01:37:26.968867  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.968882  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:26.968889  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:26.968946  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:26.993867  291455 cri.go:89] found id: ""
	I1212 01:37:26.993892  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.993908  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:26.993918  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:26.993929  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:27.025483  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:27.025547  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:27.081672  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:27.081704  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:27.095698  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:27.095724  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:27.161161  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:27.151369    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.152034    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.153696    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.156078    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.157312    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:27.151369    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.152034    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.153696    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.156078    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.157312    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:27.161189  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:27.161202  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:29.686768  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:29.699055  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:29.699131  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:29.725025  291455 cri.go:89] found id: ""
	I1212 01:37:29.725050  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.725059  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:29.725065  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:29.725140  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:29.749378  291455 cri.go:89] found id: ""
	I1212 01:37:29.749401  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.749410  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:29.749416  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:29.749481  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:29.773953  291455 cri.go:89] found id: ""
	I1212 01:37:29.773978  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.773987  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:29.773993  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:29.774052  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:29.831695  291455 cri.go:89] found id: ""
	I1212 01:37:29.831723  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.831732  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:29.831738  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:29.831794  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:29.881376  291455 cri.go:89] found id: ""
	I1212 01:37:29.881401  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.881412  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:29.881418  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:29.881477  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:29.905463  291455 cri.go:89] found id: ""
	I1212 01:37:29.905497  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.905506  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:29.905530  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:29.905618  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:29.929393  291455 cri.go:89] found id: ""
	I1212 01:37:29.929427  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.929436  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:29.929442  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:29.929507  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:29.956794  291455 cri.go:89] found id: ""
	I1212 01:37:29.956820  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.956829  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:29.956839  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:29.956850  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:29.981845  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:29.981878  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:30.037712  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:30.037751  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:30.096286  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:30.096320  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:30.111120  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:30.111160  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:30.180653  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:30.171653    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.172384    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.174167    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.174765    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.176527    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:30.171653    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.172384    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.174167    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.174765    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.176527    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
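
Every "describe nodes" attempt in this window fails identically: nothing is listening on localhost:8443, so kubectl's discovery requests are refused before a single API call is served, and the harness falls back to host-level diagnostics. The fallback, condensed from the Run: lines above (commands verbatim; `$(...)` is equivalent to the backticks in the log):

    # Host-level diagnostics gathered when the API server is unreachable.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # Container status: prefer crictl, fall back to docker if it is absent.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a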
	W1212 01:37:31.535946  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:33.536464  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:32.681768  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:32.693283  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:32.693354  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:32.720606  291455 cri.go:89] found id: ""
	I1212 01:37:32.720629  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.720638  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:32.720644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:32.720703  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:32.747145  291455 cri.go:89] found id: ""
	I1212 01:37:32.747167  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.747177  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:32.747185  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:32.747243  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:32.772037  291455 cri.go:89] found id: ""
	I1212 01:37:32.772061  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.772070  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:32.772076  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:32.772134  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:32.862885  291455 cri.go:89] found id: ""
	I1212 01:37:32.862910  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.862919  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:32.862925  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:32.862983  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:32.888016  291455 cri.go:89] found id: ""
	I1212 01:37:32.888038  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.888049  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:32.888055  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:32.888115  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:32.912450  291455 cri.go:89] found id: ""
	I1212 01:37:32.912472  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.912481  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:32.912487  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:32.912544  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:32.935759  291455 cri.go:89] found id: ""
	I1212 01:37:32.935781  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.935790  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:32.935797  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:32.935855  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:32.963827  291455 cri.go:89] found id: ""
	I1212 01:37:32.963850  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.963858  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:32.963869  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:32.963880  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:32.988758  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:32.988788  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:33.021942  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:33.021973  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:33.078907  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:33.078940  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:33.094242  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:33.094270  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:33.157981  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:33.149433    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.150328    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.151907    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.152360    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.153844    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:33.149433    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.150328    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.151907    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.152360    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.153844    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
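
Each cycle opens with a process-level check before any CRI calls: `pgrep -xnf` matches the full command line of the newest kube-apiserver process. An isolated version of that check (pattern copied from the log; a nonzero exit simply means no such process exists):

    # Is a kube-apiserver process with 'minikube' in its command line running?
    if pid="$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')"; then
      echo "kube-apiserver running as pid $pid"
    else
      echo "no kube-apiserver process found"
    fi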
	I1212 01:37:35.659737  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:35.672022  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:35.672098  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:35.701308  291455 cri.go:89] found id: ""
	I1212 01:37:35.701334  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.701343  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:35.701349  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:35.701408  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:35.726385  291455 cri.go:89] found id: ""
	I1212 01:37:35.726409  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.726418  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:35.726424  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:35.726482  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:35.751557  291455 cri.go:89] found id: ""
	I1212 01:37:35.751593  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.751604  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:35.751610  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:35.751679  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:35.776892  291455 cri.go:89] found id: ""
	I1212 01:37:35.776956  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.776971  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:35.776982  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:35.777044  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:35.824076  291455 cri.go:89] found id: ""
	I1212 01:37:35.824107  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.824116  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:35.824122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:35.824179  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:35.880084  291455 cri.go:89] found id: ""
	I1212 01:37:35.880107  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.880115  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:35.880122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:35.880192  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:35.907066  291455 cri.go:89] found id: ""
	I1212 01:37:35.907091  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.907099  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:35.907105  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:35.907166  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:35.936636  291455 cri.go:89] found id: ""
	I1212 01:37:35.936713  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.936729  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:35.936739  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:35.936750  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:35.993085  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:35.993119  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:36.007767  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:36.007856  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:36.076959  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:36.068314    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.068888    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.070632    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.071390    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.072929    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:36.068314    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.068888    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.070632    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.071390    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.072929    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:36.076984  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:36.076997  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:36.103429  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:36.103463  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:37:36.036277  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:38.536154  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:38.632890  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:38.643831  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:38.643909  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:38.671085  291455 cri.go:89] found id: ""
	I1212 01:37:38.671108  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.671116  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:38.671122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:38.671182  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:38.694933  291455 cri.go:89] found id: ""
	I1212 01:37:38.694958  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.694966  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:38.694972  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:38.695070  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:38.723033  291455 cri.go:89] found id: ""
	I1212 01:37:38.723060  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.723069  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:38.723075  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:38.723135  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:38.748068  291455 cri.go:89] found id: ""
	I1212 01:37:38.748093  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.748102  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:38.748109  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:38.748169  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:38.778336  291455 cri.go:89] found id: ""
	I1212 01:37:38.778362  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.778371  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:38.778377  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:38.778438  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:38.824425  291455 cri.go:89] found id: ""
	I1212 01:37:38.824452  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.824461  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:38.824468  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:38.824526  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:38.869581  291455 cri.go:89] found id: ""
	I1212 01:37:38.869607  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.869616  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:38.869623  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:38.869684  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:38.898375  291455 cri.go:89] found id: ""
	I1212 01:37:38.898401  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.898411  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:38.898420  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:38.898431  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:38.924559  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:38.924594  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:38.954848  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:38.954884  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:39.010528  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:39.010564  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:39.024383  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:39.024412  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:39.090716  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:39.082311    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.082890    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.084642    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.085084    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.086585    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:39.082311    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.082890    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.084642    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.085084    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.086585    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 01:37:40.536718  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:43.036535  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:45.036776  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:41.591539  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:41.602064  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:41.602135  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:41.626512  291455 cri.go:89] found id: ""
	I1212 01:37:41.626584  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.626609  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:41.626629  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:41.626713  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:41.651218  291455 cri.go:89] found id: ""
	I1212 01:37:41.651294  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.651317  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:41.651339  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:41.651429  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:41.676032  291455 cri.go:89] found id: ""
	I1212 01:37:41.676055  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.676064  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:41.676070  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:41.676144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:41.700472  291455 cri.go:89] found id: ""
	I1212 01:37:41.700495  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.700509  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:41.700516  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:41.700573  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:41.728292  291455 cri.go:89] found id: ""
	I1212 01:37:41.728317  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.728326  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:41.728332  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:41.728413  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:41.752458  291455 cri.go:89] found id: ""
	I1212 01:37:41.752496  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.752508  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:41.752515  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:41.752687  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:41.778677  291455 cri.go:89] found id: ""
	I1212 01:37:41.778703  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.778711  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:41.778717  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:41.778802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:41.831103  291455 cri.go:89] found id: ""
	I1212 01:37:41.831129  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.831138  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:41.831147  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:41.831158  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:41.922931  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:41.914201    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.914946    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.916560    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.917145    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.918787    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:41.914201    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.914946    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.916560    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.917145    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.918787    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:41.922954  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:41.922966  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:41.948574  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:41.948606  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:41.976883  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:41.976910  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:42.031740  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:42.031774  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:44.547156  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:44.557779  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:44.557852  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:44.585516  291455 cri.go:89] found id: ""
	I1212 01:37:44.585539  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.585547  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:44.585554  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:44.585614  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:44.610080  291455 cri.go:89] found id: ""
	I1212 01:37:44.610146  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.610170  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:44.610188  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:44.610282  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:44.634333  291455 cri.go:89] found id: ""
	I1212 01:37:44.634403  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.634428  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:44.634449  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:44.634538  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:44.659415  291455 cri.go:89] found id: ""
	I1212 01:37:44.659441  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.659450  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:44.659457  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:44.659518  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:44.688713  291455 cri.go:89] found id: ""
	I1212 01:37:44.688738  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.688747  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:44.688753  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:44.688813  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:44.713219  291455 cri.go:89] found id: ""
	I1212 01:37:44.713245  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.713262  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:44.713270  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:44.713334  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:44.736447  291455 cri.go:89] found id: ""
	I1212 01:37:44.736472  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.736480  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:44.736486  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:44.736562  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:44.762258  291455 cri.go:89] found id: ""
	I1212 01:37:44.762283  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.762292  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:44.762324  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:44.762341  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:44.839027  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:44.839065  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:44.856616  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:44.856643  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:44.936247  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:44.928242    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.928784    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.930267    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.930803    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.932347    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:44.936278  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:44.936291  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:44.961626  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:44.961659  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
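Note: the cycle above is minikube's control-plane probe, and every crictl query it issues appears verbatim in the log lines. A minimal sketch for reproducing the same enumeration by hand (the profile name is a placeholder, not taken from this run):

    $ minikube ssh -p <profile>
    # inside the node, list containers by name exactly as the probe does;
    # empty output means no container with that name exists in any state:
    $ sudo crictl ps -a --quiet --name=kube-apiserver
    $ sudo crictl ps -a --quiet --name=etcd
    $ sudo crictl ps -a --quiet --name=coredns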
	W1212 01:37:47.536481  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:49.536708  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
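Note: the two warnings above come from a second test process (287206) polling the node's Ready condition over the API. A roughly equivalent one-off check, assuming a kubeconfig that points at this cluster (sketch):

    $ kubectl get node no-preload-361053 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints True once the kubelet reports Ready; here the API server on
    # 192.168.85.2:8443 is not listening, so this request is refused too.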
	I1212 01:37:47.490976  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:47.501776  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:47.501852  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:47.532240  291455 cri.go:89] found id: ""
	I1212 01:37:47.532263  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.532271  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:47.532276  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:47.532336  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:47.556453  291455 cri.go:89] found id: ""
	I1212 01:37:47.556475  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.556484  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:47.556490  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:47.556551  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:47.580605  291455 cri.go:89] found id: ""
	I1212 01:37:47.580628  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.580637  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:47.580643  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:47.580709  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:47.605106  291455 cri.go:89] found id: ""
	I1212 01:37:47.605130  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.605139  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:47.605145  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:47.605224  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:47.630587  291455 cri.go:89] found id: ""
	I1212 01:37:47.630613  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.630622  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:47.630629  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:47.630733  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:47.656391  291455 cri.go:89] found id: ""
	I1212 01:37:47.656416  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.656424  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:47.656431  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:47.656489  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:47.680787  291455 cri.go:89] found id: ""
	I1212 01:37:47.680817  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.680826  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:47.680832  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:47.680913  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:47.706371  291455 cri.go:89] found id: ""
	I1212 01:37:47.706396  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.706405  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:47.706414  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:47.706458  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:47.763648  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:47.763687  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:47.777355  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:47.777383  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:47.899204  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:47.891161    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.891855    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.893228    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.893728    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.895403    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:47.899226  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:47.899238  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:47.924220  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:47.924256  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
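Note: each describe-nodes attempt fails identically because nothing is listening on port 8443 inside the node. Two quick ways to confirm that from the host (sketch; the profile name is a placeholder, and curl availability in the node image is assumed):

    $ minikube ssh -p <profile> -- sudo ss -ltnp | grep 8443   # no output: no listener
    $ minikube ssh -p <profile> -- curl -sk https://localhost:8443/healthz
    # both fail the same way as the kubectl calls captured in the log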
	I1212 01:37:50.458301  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:50.468856  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:50.468926  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:50.493349  291455 cri.go:89] found id: ""
	I1212 01:37:50.493374  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.493382  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:50.493388  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:50.493445  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:50.517926  291455 cri.go:89] found id: ""
	I1212 01:37:50.517951  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.517960  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:50.517966  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:50.518026  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:50.546779  291455 cri.go:89] found id: ""
	I1212 01:37:50.546805  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.546814  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:50.546819  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:50.546877  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:50.572059  291455 cri.go:89] found id: ""
	I1212 01:37:50.572086  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.572102  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:50.572110  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:50.572173  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:50.596562  291455 cri.go:89] found id: ""
	I1212 01:37:50.596585  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.596594  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:50.596601  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:50.596669  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:50.621102  291455 cri.go:89] found id: ""
	I1212 01:37:50.621124  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.621132  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:50.621138  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:50.621196  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:50.645424  291455 cri.go:89] found id: ""
	I1212 01:37:50.645445  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.645454  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:50.645461  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:50.645521  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:50.670456  291455 cri.go:89] found id: ""
	I1212 01:37:50.670479  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.670487  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:50.670497  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:50.670508  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:50.726487  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:50.726519  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:50.740149  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:50.740178  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:50.846147  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:50.836239    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.837070    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.839024    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.839387    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.840598    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:50.846174  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:50.846188  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:50.882509  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:50.882583  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
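Note: with no containers found, the probe falls back to host-level logs. The commands are taken verbatim from the cycle above and can be run directly inside the node:

    $ sudo journalctl -u kubelet -n 400       # last 400 kubelet journal entries
    $ sudo journalctl -u containerd -n 400    # last 400 containerd journal entries
    $ sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400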
	W1212 01:37:52.036566  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:54.036621  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:53.411213  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:53.421355  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:53.421422  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:53.444104  291455 cri.go:89] found id: ""
	I1212 01:37:53.444130  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.444139  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:53.444146  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:53.444205  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:53.467938  291455 cri.go:89] found id: ""
	I1212 01:37:53.467963  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.467972  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:53.467979  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:53.468038  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:53.492082  291455 cri.go:89] found id: ""
	I1212 01:37:53.492106  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.492115  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:53.492122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:53.492180  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:53.516011  291455 cri.go:89] found id: ""
	I1212 01:37:53.516040  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.516049  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:53.516056  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:53.516115  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:53.543513  291455 cri.go:89] found id: ""
	I1212 01:37:53.543550  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.543559  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:53.543565  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:53.543707  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:53.568681  291455 cri.go:89] found id: ""
	I1212 01:37:53.568705  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.568713  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:53.568720  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:53.568797  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:53.593562  291455 cri.go:89] found id: ""
	I1212 01:37:53.593587  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.593596  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:53.593602  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:53.593676  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:53.617634  291455 cri.go:89] found id: ""
	I1212 01:37:53.617658  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.617667  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:53.617677  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:53.617691  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:53.672956  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:53.672991  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:53.686739  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:53.686767  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:53.753435  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:53.745274    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.746109    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.747777    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.748302    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.749767    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:53.753456  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:53.753470  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:53.785303  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:53.785347  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:37:56.536427  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:59.036479  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:56.343327  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:56.353619  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:56.353686  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:56.377008  291455 cri.go:89] found id: ""
	I1212 01:37:56.377032  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.377040  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:56.377047  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:56.377103  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:56.403572  291455 cri.go:89] found id: ""
	I1212 01:37:56.403599  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.403607  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:56.403614  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:56.403677  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:56.427234  291455 cri.go:89] found id: ""
	I1212 01:37:56.427256  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.427266  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:56.427272  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:56.427329  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:56.450300  291455 cri.go:89] found id: ""
	I1212 01:37:56.450325  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.450334  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:56.450340  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:56.450399  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:56.478269  291455 cri.go:89] found id: ""
	I1212 01:37:56.478293  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.478302  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:56.478308  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:56.478402  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:56.502839  291455 cri.go:89] found id: ""
	I1212 01:37:56.502863  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.502872  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:56.502879  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:56.502939  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:56.528770  291455 cri.go:89] found id: ""
	I1212 01:37:56.528796  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.528804  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:56.528810  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:56.528886  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:56.552625  291455 cri.go:89] found id: ""
	I1212 01:37:56.552687  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.552701  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:56.552710  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:56.552722  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:56.582901  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:56.582929  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:56.638758  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:56.638790  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:56.652337  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:56.652364  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:56.718815  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:56.710468    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.711245    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.712862    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.713372    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.714933    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:56.718853  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:56.718866  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
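Note: every cycle opens with a process-level probe before the CRI queries. In pgrep terms, -x requires the pattern to match the whole string, -n keeps only the newest match, and -f matches against the full command line rather than just the process name (sketch):

    $ sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # no output and exit status 1 throughout this log: no apiserver process yet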
	I1212 01:37:59.245105  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:59.255232  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:59.255300  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:59.280996  291455 cri.go:89] found id: ""
	I1212 01:37:59.281018  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.281027  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:59.281033  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:59.281089  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:59.306870  291455 cri.go:89] found id: ""
	I1212 01:37:59.306893  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.306901  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:59.306908  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:59.306967  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:59.332982  291455 cri.go:89] found id: ""
	I1212 01:37:59.333008  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.333017  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:59.333022  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:59.333128  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:59.360799  291455 cri.go:89] found id: ""
	I1212 01:37:59.360824  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.360833  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:59.360839  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:59.360897  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:59.383773  291455 cri.go:89] found id: ""
	I1212 01:37:59.383836  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.383851  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:59.383858  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:59.383916  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:59.411933  291455 cri.go:89] found id: ""
	I1212 01:37:59.411958  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.411966  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:59.411973  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:59.412073  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:59.437061  291455 cri.go:89] found id: ""
	I1212 01:37:59.437087  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.437095  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:59.437102  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:59.437182  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:59.461853  291455 cri.go:89] found id: ""
	I1212 01:37:59.461877  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.461886  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:59.461895  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:59.461907  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:59.493084  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:59.493111  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:59.549198  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:59.549229  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:59.562644  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:59.562674  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:59.627349  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:59.619195    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.619835    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.621508    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.622053    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.623671    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:59.627373  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:59.627388  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1212 01:38:01.535866  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:03.536428  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:02.153040  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:02.163386  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:02.163465  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:02.188022  291455 cri.go:89] found id: ""
	I1212 01:38:02.188050  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.188058  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:02.188064  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:02.188126  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:02.212051  291455 cri.go:89] found id: ""
	I1212 01:38:02.212088  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.212097  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:02.212104  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:02.212163  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:02.236784  291455 cri.go:89] found id: ""
	I1212 01:38:02.236815  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.236824  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:02.236831  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:02.236895  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:02.262277  291455 cri.go:89] found id: ""
	I1212 01:38:02.262301  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.262310  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:02.262316  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:02.262375  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:02.286641  291455 cri.go:89] found id: ""
	I1212 01:38:02.286665  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.286674  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:02.286680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:02.286739  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:02.315696  291455 cri.go:89] found id: ""
	I1212 01:38:02.315721  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.315729  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:02.315736  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:02.315796  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:02.341469  291455 cri.go:89] found id: ""
	I1212 01:38:02.341495  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.341504  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:02.341511  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:02.341578  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:02.375601  291455 cri.go:89] found id: ""
	I1212 01:38:02.375626  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.375634  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:02.375644  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:02.375656  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:02.388949  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:02.388978  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:02.458902  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:02.448758    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.449311    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.452630    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.453261    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.454829    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:02.458924  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:02.458936  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:02.485359  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:02.485393  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:02.512676  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:02.512746  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:05.069728  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:05.084872  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:05.084975  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:05.130414  291455 cri.go:89] found id: ""
	I1212 01:38:05.130441  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.130450  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:05.130457  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:05.130524  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:05.156129  291455 cri.go:89] found id: ""
	I1212 01:38:05.156154  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.156163  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:05.156169  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:05.156230  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:05.182033  291455 cri.go:89] found id: ""
	I1212 01:38:05.182056  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.182065  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:05.182071  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:05.182131  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:05.206795  291455 cri.go:89] found id: ""
	I1212 01:38:05.206821  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.206830  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:05.206842  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:05.206903  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:05.231972  291455 cri.go:89] found id: ""
	I1212 01:38:05.231998  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.232008  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:05.232014  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:05.232075  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:05.257476  291455 cri.go:89] found id: ""
	I1212 01:38:05.257501  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.257509  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:05.257515  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:05.257576  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:05.282557  291455 cri.go:89] found id: ""
	I1212 01:38:05.282581  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.282590  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:05.282595  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:05.282655  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:05.306866  291455 cri.go:89] found id: ""
	I1212 01:38:05.306891  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.306899  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:05.306908  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:05.306919  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:05.363028  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:05.363073  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:05.376693  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:05.376722  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:05.445040  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:05.435873    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.436618    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.438470    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.439137    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.440737    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:05.445059  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:05.445071  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:05.470893  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:05.470933  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
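The block above is one complete diagnostic pass: minikube first looks for a live kube-apiserver process (pgrep), then asks the CRI runtime for each expected control-plane container by name, and only after every lookup comes back empty ('found id: ""') does it fall back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs. A minimal sketch of the same checks run by hand over minikube ssh; <profile> is a placeholder, not a name taken from this log:

    # Sketch only: reproduces the per-cycle checks logged above.
    # <profile> is hypothetical; substitute the affected minikube profile.
    minikube ssh -p <profile> -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      # An empty result here corresponds to a 'found id: ""' line in the log.
      minikube ssh -p <profile> -- sudo crictl ps -a --quiet --name="$c"
    done

Because every one of these queries returns nothing, the pass ends in log gathering and the loop repeats a few seconds later.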
	W1212 01:38:05.536804  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:08.035822  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:10.036632  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:08.000563  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:08.015628  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:08.015701  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:08.081620  291455 cri.go:89] found id: ""
	I1212 01:38:08.081643  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.081652  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:08.081661  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:08.081736  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:08.129116  291455 cri.go:89] found id: ""
	I1212 01:38:08.129137  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.129146  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:08.129152  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:08.129208  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:08.154760  291455 cri.go:89] found id: ""
	I1212 01:38:08.154781  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.154790  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:08.154797  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:08.154853  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:08.181948  291455 cri.go:89] found id: ""
	I1212 01:38:08.181971  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.181981  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:08.181988  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:08.182052  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:08.206310  291455 cri.go:89] found id: ""
	I1212 01:38:08.206335  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.206345  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:08.206351  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:08.206413  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:08.230579  291455 cri.go:89] found id: ""
	I1212 01:38:08.230606  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.230615  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:08.230624  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:08.230690  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:08.259888  291455 cri.go:89] found id: ""
	I1212 01:38:08.259913  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.259922  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:08.259928  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:08.260006  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:08.284903  291455 cri.go:89] found id: ""
	I1212 01:38:08.284927  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.284936  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:08.284945  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:08.284957  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:08.341529  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:08.341565  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:08.355353  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:08.355394  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:08.418766  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:08.409488    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.410375    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.412414    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.413281    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.414948    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:08.409488    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.410375    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.412414    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.413281    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.414948    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:08.418789  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:08.418801  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:08.444616  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:08.444654  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:10.972656  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:10.983126  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:10.983206  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:11.011272  291455 cri.go:89] found id: ""
	I1212 01:38:11.011296  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.011305  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:11.011311  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:11.011372  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:11.061173  291455 cri.go:89] found id: ""
	I1212 01:38:11.061199  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.061208  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:11.061214  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:11.061273  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:11.124035  291455 cri.go:89] found id: ""
	I1212 01:38:11.124061  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.124070  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:11.124077  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:11.124144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:11.152861  291455 cri.go:89] found id: ""
	I1212 01:38:11.152900  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.152910  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:11.152932  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:11.153005  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:11.178248  291455 cri.go:89] found id: ""
	I1212 01:38:11.178270  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.178279  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:11.178285  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:11.178355  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:11.213235  291455 cri.go:89] found id: ""
	I1212 01:38:11.213260  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.213269  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:11.213275  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:11.213337  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:11.238933  291455 cri.go:89] found id: ""
	I1212 01:38:11.238960  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.238969  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:11.238975  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:11.239060  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:11.264115  291455 cri.go:89] found id: ""
	I1212 01:38:11.264137  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.264146  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:11.264155  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:11.264167  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:11.320523  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:11.320561  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:11.334027  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:11.334059  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:12.036672  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:14.536663  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:11.411780  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:11.403056    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.403575    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.405319    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.405839    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.407505    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:11.403056    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.403575    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.405319    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.405839    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.407505    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:11.411803  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:11.411815  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:11.437459  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:11.437498  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:13.966371  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:13.976737  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:13.976807  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:14.002889  291455 cri.go:89] found id: ""
	I1212 01:38:14.002926  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.002936  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:14.002943  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:14.003051  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:14.028607  291455 cri.go:89] found id: ""
	I1212 01:38:14.028632  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.028640  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:14.028647  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:14.028707  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:14.068137  291455 cri.go:89] found id: ""
	I1212 01:38:14.068159  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.068168  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:14.068174  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:14.068236  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:14.114047  291455 cri.go:89] found id: ""
	I1212 01:38:14.114068  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.114077  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:14.114083  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:14.114142  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:14.143724  291455 cri.go:89] found id: ""
	I1212 01:38:14.143751  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.143760  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:14.143766  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:14.143837  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:14.172821  291455 cri.go:89] found id: ""
	I1212 01:38:14.172844  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.172853  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:14.172860  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:14.172922  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:14.201404  291455 cri.go:89] found id: ""
	I1212 01:38:14.201428  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.201437  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:14.201443  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:14.201502  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:14.225421  291455 cri.go:89] found id: ""
	I1212 01:38:14.225445  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.225454  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:14.225464  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:14.225475  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:14.281620  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:14.281655  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:14.295270  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:14.295297  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:14.361558  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:14.353174    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.353959    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.355541    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.356054    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.357617    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:14.353174    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.353959    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.355541    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.356054    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.357617    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:14.361580  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:14.361594  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:14.387622  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:14.387657  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
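Each describe-nodes attempt in this window fails identically: kubectl cannot reach https://localhost:8443, and the dial error on [::1]:8443 is a refused connection rather than a timeout, meaning nothing is bound to the apiserver port inside the node at all. A quick hedged check from inside the node (via minikube ssh), assuming the ss and curl binaries are present in the node image:

    # Sketch only: confirm whether anything is listening on the apiserver port.
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    # /livez is readable by unauthenticated clients under default RBAC on
    # recent Kubernetes releases; -k skips TLS verification for a quick probe.
    curl -sk https://localhost:8443/livez || echo "apiserver not reachable"

If the ss line prints nothing, the repeated 'connection refused' entries above follow directly: kubectl has no server to talk to, so memcache.go fails on every API group list fetch.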
	W1212 01:38:17.036493  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:19.535924  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:16.917930  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:16.928677  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:16.928747  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:16.956782  291455 cri.go:89] found id: ""
	I1212 01:38:16.956805  291455 logs.go:282] 0 containers: []
	W1212 01:38:16.956815  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:16.956821  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:16.956882  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:16.982223  291455 cri.go:89] found id: ""
	I1212 01:38:16.982255  291455 logs.go:282] 0 containers: []
	W1212 01:38:16.982264  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:16.982270  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:16.982337  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:17.011072  291455 cri.go:89] found id: ""
	I1212 01:38:17.011097  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.011107  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:17.011114  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:17.011191  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:17.052070  291455 cri.go:89] found id: ""
	I1212 01:38:17.052096  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.052104  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:17.052110  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:17.052177  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:17.084107  291455 cri.go:89] found id: ""
	I1212 01:38:17.084141  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.084151  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:17.084157  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:17.084224  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:17.122692  291455 cri.go:89] found id: ""
	I1212 01:38:17.122766  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.122797  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:17.122817  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:17.122923  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:17.156006  291455 cri.go:89] found id: ""
	I1212 01:38:17.156081  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.156109  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:17.156129  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:17.156241  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:17.182169  291455 cri.go:89] found id: ""
	I1212 01:38:17.182240  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.182264  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:17.182285  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:17.182335  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:17.237895  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:17.237933  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:17.252584  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:17.252654  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:17.321480  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:17.312815    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.313531    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.315204    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.315765    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.317270    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:17.312815    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.313531    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.315204    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.315765    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.317270    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:17.321502  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:17.321515  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:17.347596  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:17.347629  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:19.879967  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:19.890396  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:19.890464  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:19.918925  291455 cri.go:89] found id: ""
	I1212 01:38:19.918949  291455 logs.go:282] 0 containers: []
	W1212 01:38:19.918958  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:19.918964  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:19.919053  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:19.943584  291455 cri.go:89] found id: ""
	I1212 01:38:19.943610  291455 logs.go:282] 0 containers: []
	W1212 01:38:19.943619  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:19.943626  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:19.943681  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:19.969048  291455 cri.go:89] found id: ""
	I1212 01:38:19.969068  291455 logs.go:282] 0 containers: []
	W1212 01:38:19.969077  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:19.969083  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:19.969144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:20.003773  291455 cri.go:89] found id: ""
	I1212 01:38:20.003795  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.003804  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:20.003821  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:20.003894  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:20.066569  291455 cri.go:89] found id: ""
	I1212 01:38:20.066593  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.066602  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:20.066608  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:20.066672  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:20.123787  291455 cri.go:89] found id: ""
	I1212 01:38:20.123818  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.123828  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:20.123835  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:20.123902  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:20.148942  291455 cri.go:89] found id: ""
	I1212 01:38:20.148967  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.148976  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:20.148982  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:20.149040  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:20.174974  291455 cri.go:89] found id: ""
	I1212 01:38:20.175019  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.175028  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:20.175037  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:20.175049  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:20.188705  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:20.188734  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:20.257975  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:20.247998    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.248900    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.250615    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.251381    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.253188    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:20.247998    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.248900    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.250615    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.251381    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.253188    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:20.258004  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:20.258018  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:20.283558  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:20.283589  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:20.313552  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:20.313580  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 01:38:21.535995  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:23.536531  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:22.869782  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:22.880016  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:22.880091  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:22.903866  291455 cri.go:89] found id: ""
	I1212 01:38:22.903891  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.903901  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:22.903908  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:22.903971  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:22.927721  291455 cri.go:89] found id: ""
	I1212 01:38:22.927744  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.927752  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:22.927759  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:22.927816  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:22.952423  291455 cri.go:89] found id: ""
	I1212 01:38:22.952447  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.952455  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:22.952461  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:22.952517  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:22.976598  291455 cri.go:89] found id: ""
	I1212 01:38:22.976620  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.976628  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:22.976634  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:22.976691  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:23.003885  291455 cri.go:89] found id: ""
	I1212 01:38:23.003919  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.003939  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:23.003947  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:23.004046  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:23.033013  291455 cri.go:89] found id: ""
	I1212 01:38:23.033036  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.033045  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:23.033052  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:23.033112  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:23.092706  291455 cri.go:89] found id: ""
	I1212 01:38:23.092730  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.092739  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:23.092745  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:23.092802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:23.133640  291455 cri.go:89] found id: ""
	I1212 01:38:23.133668  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.133676  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:23.133686  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:23.133697  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:23.196413  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:23.196452  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:23.209608  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:23.209634  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:23.275524  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:23.267738    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.268351    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.269907    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.270261    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.271739    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:23.267738    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.268351    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.269907    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.270261    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.271739    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:23.275547  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:23.275559  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:23.300618  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:23.300651  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:25.829093  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:25.839308  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:25.839392  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:25.862901  291455 cri.go:89] found id: ""
	I1212 01:38:25.862927  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.862936  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:25.862942  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:25.863050  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:25.886878  291455 cri.go:89] found id: ""
	I1212 01:38:25.886912  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.886921  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:25.886927  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:25.887012  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:25.912760  291455 cri.go:89] found id: ""
	I1212 01:38:25.912782  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.912791  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:25.912799  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:25.912867  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:25.937385  291455 cri.go:89] found id: ""
	I1212 01:38:25.937409  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.937418  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:25.937424  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:25.937482  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:25.961635  291455 cri.go:89] found id: ""
	I1212 01:38:25.961659  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.961668  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:25.961674  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:25.961736  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:25.984780  291455 cri.go:89] found id: ""
	I1212 01:38:25.984804  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.984814  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:25.984821  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:25.984886  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:26.013891  291455 cri.go:89] found id: ""
	I1212 01:38:26.013918  291455 logs.go:282] 0 containers: []
	W1212 01:38:26.013927  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:26.013933  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:26.013995  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:26.058178  291455 cri.go:89] found id: ""
	I1212 01:38:26.058203  291455 logs.go:282] 0 containers: []
	W1212 01:38:26.058212  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:26.058222  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:26.058233  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:26.145226  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:26.145265  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:26.159401  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:26.159430  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:26.224696  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:26.216061    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.217085    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.217937    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.219401    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.219913    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:26.216061    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.217085    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.217937    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.219401    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.219913    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:26.224716  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:26.224727  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:26.249818  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:26.249853  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:25.536763  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:28.036701  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:30.036797  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
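Interleaved with the cycles above, a second process (pid 287206) is polling the Ready condition of node no-preload-361053 roughly every 2 to 2.5 seconds and hitting the same refused dial, this time on 192.168.85.2:8443. The equivalent manual probe, assuming minikube has written a kubeconfig context named after the profile on the host (a standard minikube behavior, not confirmed by this log):

    # Sketch only: the same Ready-condition lookup the retry loop performs.
    kubectl --context no-preload-361053 get node no-preload-361053 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # While the apiserver is down this fails with the identical
    # 'dial tcp 192.168.85.2:8443: connect: connection refused' error.

Both failure streams therefore point at the same root cause: no apiserver container ever comes up, so every client-side retry, in either process, is refused at the socket.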
	I1212 01:38:28.780686  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:28.791844  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:28.791927  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:28.820089  291455 cri.go:89] found id: ""
	I1212 01:38:28.820114  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.820123  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:28.820129  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:28.820187  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:28.844073  291455 cri.go:89] found id: ""
	I1212 01:38:28.844097  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.844106  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:28.844115  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:28.844173  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:28.874510  291455 cri.go:89] found id: ""
	I1212 01:38:28.874535  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.874544  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:28.874550  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:28.874609  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:28.899593  291455 cri.go:89] found id: ""
	I1212 01:38:28.899667  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.899683  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:28.899691  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:28.899749  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:28.923958  291455 cri.go:89] found id: ""
	I1212 01:38:28.923981  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.923990  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:28.923996  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:28.924058  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:28.949188  291455 cri.go:89] found id: ""
	I1212 01:38:28.949217  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.949225  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:28.949231  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:28.949307  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:28.974943  291455 cri.go:89] found id: ""
	I1212 01:38:28.974968  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.974976  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:28.974982  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:28.975062  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:29.004380  291455 cri.go:89] found id: ""
	I1212 01:38:29.004475  291455 logs.go:282] 0 containers: []
	W1212 01:38:29.004501  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:29.004542  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:29.004572  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:29.021785  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:29.021856  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:29.143333  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:29.134378    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.134910    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.137306    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.137843    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.139511    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:29.134378    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.134910    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.137306    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.137843    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.139511    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:29.143354  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:29.143366  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:29.168668  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:29.168699  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:29.197133  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:29.197159  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	[... the same log-gathering loop (pgrep kube-apiserver, crictl ps for each control-plane component, journalctl, dmesg, "kubectl describe nodes") repeated at 01:38:31, 01:38:34, 01:38:37, 01:38:40, 01:38:43, and 01:38:46, each pass finding no containers and failing "describe nodes" with the same connection-refused errors; interleaved node_ready checks for "no-preload-361053" likewise kept failing with "dial tcp 192.168.85.2:8443: connect: connection refused" through 01:38:49 ...]
	I1212 01:38:49.328051  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:49.341287  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:49.341360  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:49.378113  291455 cri.go:89] found id: ""
	I1212 01:38:49.378135  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.378143  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:49.378149  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:49.378210  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:49.404269  291455 cri.go:89] found id: ""
	I1212 01:38:49.404291  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.404300  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:49.404306  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:49.404364  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:49.428783  291455 cri.go:89] found id: ""
	I1212 01:38:49.428809  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.428819  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:49.428825  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:49.428884  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:49.453856  291455 cri.go:89] found id: ""
	I1212 01:38:49.453889  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.453898  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:49.453905  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:49.453965  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:49.480403  291455 cri.go:89] found id: ""
	I1212 01:38:49.480428  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.480439  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:49.480445  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:49.480502  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:49.505527  291455 cri.go:89] found id: ""
	I1212 01:38:49.505594  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.505617  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:49.505644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:49.505740  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:49.529450  291455 cri.go:89] found id: ""
	I1212 01:38:49.529474  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.529483  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:49.529489  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:49.529546  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:49.554349  291455 cri.go:89] found id: ""
	I1212 01:38:49.554412  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.554435  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:49.554465  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:49.554493  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:49.611773  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:49.611805  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:49.625145  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:49.625169  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:49.689186  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:49.680639    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.681463    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.683157    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.683640    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.685287    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:49.689208  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:49.689220  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:49.715241  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:49.715275  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
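
The run above is one pass of minikube's log-gathering loop: with no kube-apiserver process found by pgrep, it probes each expected control-plane container by name via crictl, finds none, and then collects kubelet, dmesg, describe-nodes, containerd, and container-status output before retrying. The same probe can be reproduced by hand on the node; a minimal sketch using only commands that appear verbatim in this log (the component names are the ones queried above):

    # Probe each control-plane component the way logs.go does
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"
    done
    # Fallback used by the "container status" step
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
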
	W1212 01:38:51.536523  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:53.536619  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
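
The two W-lines above come from a different PID (287206, the no-preload-361053 profile) than the describe-nodes loop (291455): several tests run in parallel and their log streams interleave, which is also why some timestamps in this stretch appear slightly out of order. To follow a single test, filter on its PID; a hypothetical example (report.log stands in for wherever this output was saved):

    grep ' 291455 ' report.log   # only the functional-test gathering loop
    grep ' 287206 ' report.log   # only the no-preload node_ready retries
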
	I1212 01:38:52.245578  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:52.255964  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:52.256032  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:52.288234  291455 cri.go:89] found id: ""
	I1212 01:38:52.288273  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.288281  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:52.288287  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:52.288362  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:52.361726  291455 cri.go:89] found id: ""
	I1212 01:38:52.361756  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.361765  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:52.361772  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:52.361848  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:52.390222  291455 cri.go:89] found id: ""
	I1212 01:38:52.390248  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.390257  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:52.390262  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:52.390320  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:52.415677  291455 cri.go:89] found id: ""
	I1212 01:38:52.415712  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.415721  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:52.415728  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:52.415796  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:52.440412  291455 cri.go:89] found id: ""
	I1212 01:38:52.440435  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.440444  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:52.440450  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:52.440508  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:52.464172  291455 cri.go:89] found id: ""
	I1212 01:38:52.464203  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.464212  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:52.464219  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:52.464278  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:52.496050  291455 cri.go:89] found id: ""
	I1212 01:38:52.496075  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.496083  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:52.496089  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:52.496147  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:52.525249  291455 cri.go:89] found id: ""
	I1212 01:38:52.525271  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.525279  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:52.525288  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:52.525299  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:52.580198  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:52.580233  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:52.593582  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:52.593648  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:52.659167  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:52.650803    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.651520    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.653182    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.653702    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.655438    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:52.659187  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:52.659199  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:52.685268  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:52.685300  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:55.219025  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:55.229148  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:55.229222  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:55.252977  291455 cri.go:89] found id: ""
	I1212 01:38:55.253051  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.253066  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:55.253077  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:55.253140  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:55.276881  291455 cri.go:89] found id: ""
	I1212 01:38:55.276945  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.276959  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:55.276966  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:55.277024  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:55.316321  291455 cri.go:89] found id: ""
	I1212 01:38:55.316355  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.316364  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:55.316370  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:55.316447  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:55.355675  291455 cri.go:89] found id: ""
	I1212 01:38:55.355703  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.355711  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:55.355717  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:55.355791  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:55.394580  291455 cri.go:89] found id: ""
	I1212 01:38:55.394607  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.394615  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:55.394621  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:55.394693  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:55.423340  291455 cri.go:89] found id: ""
	I1212 01:38:55.423363  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.423371  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:55.423378  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:55.423436  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:55.447512  291455 cri.go:89] found id: ""
	I1212 01:38:55.447536  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.447544  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:55.447550  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:55.447610  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:55.470830  291455 cri.go:89] found id: ""
	I1212 01:38:55.470853  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.470867  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:55.470876  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:55.470886  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:55.528525  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:55.528561  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:55.541815  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:55.541843  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:55.605253  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:55.596889    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.597592    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.599233    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.599799    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.601358    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:55.605280  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:55.605292  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:55.631237  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:55.631267  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:55.536688  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:58.036700  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:58.158753  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:58.169462  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:58.169546  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:58.194075  291455 cri.go:89] found id: ""
	I1212 01:38:58.194096  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.194105  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:58.194111  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:58.194171  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:58.218468  291455 cri.go:89] found id: ""
	I1212 01:38:58.218546  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.218569  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:58.218590  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:58.218675  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:58.242950  291455 cri.go:89] found id: ""
	I1212 01:38:58.242973  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.242981  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:58.242987  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:58.243142  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:58.269403  291455 cri.go:89] found id: ""
	I1212 01:38:58.269423  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.269432  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:58.269439  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:58.269502  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:58.317022  291455 cri.go:89] found id: ""
	I1212 01:38:58.317044  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.317054  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:58.317059  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:58.317117  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:58.373414  291455 cri.go:89] found id: ""
	I1212 01:38:58.373486  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.373511  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:58.373531  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:58.373619  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:58.404516  291455 cri.go:89] found id: ""
	I1212 01:38:58.404583  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.404597  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:58.404604  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:58.404663  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:58.433096  291455 cri.go:89] found id: ""
	I1212 01:38:58.433120  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.433131  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:58.433141  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:58.433170  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:58.495200  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:58.486845    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.487734    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.489310    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.489623    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.491296    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:58.495223  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:58.495237  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:58.520595  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:58.520626  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:58.547636  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:58.547664  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:58.603945  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:58.603979  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:01.119071  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:01.130124  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:01.130196  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:01.155700  291455 cri.go:89] found id: ""
	I1212 01:39:01.155725  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.155733  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:01.155740  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:01.155799  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:01.183985  291455 cri.go:89] found id: ""
	I1212 01:39:01.184012  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.184021  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:01.184028  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:01.184095  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:01.211713  291455 cri.go:89] found id: ""
	I1212 01:39:01.211740  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.211749  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:01.211756  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:01.211817  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:01.238159  291455 cri.go:89] found id: ""
	I1212 01:39:01.238185  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.238195  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:01.238201  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:01.238265  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:01.264520  291455 cri.go:89] found id: ""
	I1212 01:39:01.264544  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.264553  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:01.264560  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:01.264618  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:01.320162  291455 cri.go:89] found id: ""
	I1212 01:39:01.320191  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.320200  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:01.320207  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:01.320276  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1212 01:39:00.536335  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:02.536671  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:05.036449  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:39:01.367993  291455 cri.go:89] found id: ""
	I1212 01:39:01.368020  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.368029  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:01.368037  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:01.368107  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:01.395205  291455 cri.go:89] found id: ""
	I1212 01:39:01.395230  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.395239  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:01.395248  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:01.395260  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:01.450970  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:01.451049  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:01.464511  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:01.464540  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:01.529452  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:01.521771    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.522386    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.523907    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.524217    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.525703    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:01.529472  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:01.529484  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:01.553702  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:01.553734  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:04.082286  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:04.093237  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:04.093313  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:04.118261  291455 cri.go:89] found id: ""
	I1212 01:39:04.118283  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.118292  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:04.118298  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:04.118360  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:04.147714  291455 cri.go:89] found id: ""
	I1212 01:39:04.147736  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.147745  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:04.147751  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:04.147815  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:04.172999  291455 cri.go:89] found id: ""
	I1212 01:39:04.173023  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.173032  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:04.173039  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:04.173101  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:04.197081  291455 cri.go:89] found id: ""
	I1212 01:39:04.197103  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.197111  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:04.197119  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:04.197176  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:04.220639  291455 cri.go:89] found id: ""
	I1212 01:39:04.220665  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.220674  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:04.220681  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:04.220746  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:04.248901  291455 cri.go:89] found id: ""
	I1212 01:39:04.248926  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.248935  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:04.248944  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:04.249011  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:04.274064  291455 cri.go:89] found id: ""
	I1212 01:39:04.274085  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.274093  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:04.274099  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:04.274161  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:04.332510  291455 cri.go:89] found id: ""
	I1212 01:39:04.332535  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.332545  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:04.332555  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:04.332572  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:04.368151  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:04.368189  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:04.403091  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:04.403118  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:04.459000  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:04.459031  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:04.472281  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:04.472306  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:04.534979  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:04.526363    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.527054    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.528724    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.529233    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.530692    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
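
Every failure in this stretch reduces to the same symptom: nothing is listening on the apiserver port, so both kubectl (localhost:8443) and the node_ready poller (192.168.85.2:8443) get connection refused. A quick manual confirmation, assuming SSH access to the node (an illustrative check, not part of the test itself):

    sudo ss -tlnp | grep 8443 || echo "no listener on 8443"
    sudo pgrep -af kube-apiserver || echo "no kube-apiserver process"
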
	W1212 01:39:07.036549  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:09.036731  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:39:07.035447  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:07.046244  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:07.046313  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:07.072737  291455 cri.go:89] found id: ""
	I1212 01:39:07.072761  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.072770  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:07.072776  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:07.072835  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:07.097400  291455 cri.go:89] found id: ""
	I1212 01:39:07.097423  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.097431  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:07.097438  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:07.097496  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:07.121464  291455 cri.go:89] found id: ""
	I1212 01:39:07.121486  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.121495  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:07.121501  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:07.121584  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:07.145780  291455 cri.go:89] found id: ""
	I1212 01:39:07.145800  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.145808  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:07.145814  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:07.145870  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:07.169997  291455 cri.go:89] found id: ""
	I1212 01:39:07.170018  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.170027  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:07.170033  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:07.170091  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:07.195061  291455 cri.go:89] found id: ""
	I1212 01:39:07.195088  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.195096  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:07.195103  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:07.195161  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:07.220294  291455 cri.go:89] found id: ""
	I1212 01:39:07.220317  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.220325  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:07.220331  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:07.220389  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:07.245551  291455 cri.go:89] found id: ""
	I1212 01:39:07.245576  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.245586  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:07.245595  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:07.245607  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:07.277493  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:07.277521  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:07.344946  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:07.347238  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:07.376690  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:07.376714  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:07.447695  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:07.438862    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.439591    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.441334    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.441943    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.443673    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:07.447717  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:07.447730  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:09.974214  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:09.987839  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:09.987921  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:10.025371  291455 cri.go:89] found id: ""
	I1212 01:39:10.025397  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.025407  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:10.025413  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:10.025477  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:10.051333  291455 cri.go:89] found id: ""
	I1212 01:39:10.051357  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.051366  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:10.051371  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:10.051436  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:10.075263  291455 cri.go:89] found id: ""
	I1212 01:39:10.075289  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.075298  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:10.075305  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:10.075364  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:10.103331  291455 cri.go:89] found id: ""
	I1212 01:39:10.103355  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.103364  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:10.103370  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:10.103431  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:10.128706  291455 cri.go:89] found id: ""
	I1212 01:39:10.128730  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.128739  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:10.128746  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:10.128802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:10.154605  291455 cri.go:89] found id: ""
	I1212 01:39:10.154627  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.154637  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:10.154644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:10.154703  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:10.179767  291455 cri.go:89] found id: ""
	I1212 01:39:10.179791  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.179800  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:10.179806  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:10.179864  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:10.208346  291455 cri.go:89] found id: ""
	I1212 01:39:10.208369  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.208376  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:10.208386  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:10.208397  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:10.263848  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:10.263883  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:10.279969  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:10.279994  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:10.405176  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:10.396616    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.397197    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.398853    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.399595    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.401217    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:10.405198  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:10.405210  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:10.431360  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:10.431398  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
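The cycle above is minikube's diagnostic loop: for each expected control-plane component it lists matching CRI containers and warns when none exist. A minimal Go sketch of that polling pattern, assuming crictl is on PATH and reusing the component names from the log lines above (illustrative only, not minikube's actual cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Component names copied from the listings above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// listContainers mirrors `sudo crictl ps -a --quiet --name=<name>`:
// one container ID per output line, possibly none at all.
func listContainers(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil // treat a failed listing like an empty one
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range components {
		if ids := listContainers(c); len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
		}
	}
}

An empty listing for every component on every pass, as here, means the kubelet never created the control-plane pods, which is consistent with the connection-refused errors that follow.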
	W1212 01:39:11.536529  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:13.536580  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:39:12.959344  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:12.971541  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:12.971628  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:13.006786  291455 cri.go:89] found id: ""
	I1212 01:39:13.006815  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.006824  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:13.006830  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:13.006903  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:13.032106  291455 cri.go:89] found id: ""
	I1212 01:39:13.032127  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.032135  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:13.032141  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:13.032200  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:13.057432  291455 cri.go:89] found id: ""
	I1212 01:39:13.057454  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.057463  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:13.057469  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:13.057529  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:13.082502  291455 cri.go:89] found id: ""
	I1212 01:39:13.082524  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.082532  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:13.082538  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:13.082595  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:13.108199  291455 cri.go:89] found id: ""
	I1212 01:39:13.108272  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.108295  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:13.108323  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:13.108433  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:13.134284  291455 cri.go:89] found id: ""
	I1212 01:39:13.134356  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.134379  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:13.134398  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:13.134485  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:13.159517  291455 cri.go:89] found id: ""
	I1212 01:39:13.159541  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.159550  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:13.159556  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:13.159614  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:13.183175  291455 cri.go:89] found id: ""
	I1212 01:39:13.183199  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.183207  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:13.183216  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:13.183232  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:13.241174  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:13.241210  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:13.254849  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:13.254880  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:13.381552  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:13.373347    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.373888    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.375400    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.375820    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.377000    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:13.381573  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:13.381586  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:13.406354  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:13.406385  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
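Each `kubectl describe nodes` attempt in this run fails before reaching the API: nothing is listening on localhost:8443, so even the discovery request for the server API group list is refused. A standard-library probe of that symptom (a diagnostic sketch, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The address the failing kubectl calls above are dialing.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the log: dial tcp [::1]:8443: connect: connection refused
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}

Run inside the node, this prints the same "connect: connection refused" seen in the stderr blocks above, pointing at a server that never came up rather than a client-side misconfiguration.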
	I1212 01:39:15.933099  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:15.943596  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:15.943674  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:15.966960  291455 cri.go:89] found id: ""
	I1212 01:39:15.967014  291455 logs.go:282] 0 containers: []
	W1212 01:39:15.967023  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:15.967030  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:15.967090  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:15.996145  291455 cri.go:89] found id: ""
	I1212 01:39:15.996167  291455 logs.go:282] 0 containers: []
	W1212 01:39:15.996175  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:15.996182  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:15.996239  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:16.025152  291455 cri.go:89] found id: ""
	I1212 01:39:16.025175  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.025183  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:16.025191  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:16.025248  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:16.050231  291455 cri.go:89] found id: ""
	I1212 01:39:16.050264  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.050273  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:16.050279  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:16.050345  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:16.076929  291455 cri.go:89] found id: ""
	I1212 01:39:16.076958  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.076967  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:16.076975  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:16.077054  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:16.102241  291455 cri.go:89] found id: ""
	I1212 01:39:16.102273  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.102282  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:16.102304  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:16.102383  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:16.126239  291455 cri.go:89] found id: ""
	I1212 01:39:16.126302  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.126324  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:16.126344  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:16.126417  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:16.151645  291455 cri.go:89] found id: ""
	I1212 01:39:16.151674  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.151683  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:16.151692  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:16.151702  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:16.176852  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:16.176882  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:16.206720  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:16.206746  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:16.262653  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:16.262686  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:16.275603  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:16.275634  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:16.035987  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:18.036847  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:39:18.536213  287206 node_ready.go:38] duration metric: took 6m0.000908955s for node "no-preload-361053" to be "Ready" ...
	I1212 01:39:18.539274  287206 out.go:203] 
	W1212 01:39:18.542145  287206 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 01:39:18.542166  287206 out.go:285] * 
	W1212 01:39:18.544311  287206 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:39:18.547291  287206 out.go:203] 
	W1212 01:39:16.359325  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:16.351492    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.351974    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.353266    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.353666    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.355275    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
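The interleaved no-preload-361053 lines show where this pattern ends: after six minutes of the same refused GETs against /api/v1/nodes/no-preload-361053, the Ready wait gives up and minikube exits with GUEST_START: context deadline exceeded. A sketch of such a deadline-bounded wait loop, with checkReady as a hypothetical stand-in for that GET (illustrative, not minikube's node_ready.go):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// checkReady stands in for the HTTPS GET that kept failing above.
func checkReady(ctx context.Context) (bool, error) {
	return false, errors.New("connect: connection refused")
}

// waitNodeReady polls until checkReady reports Ready or the deadline expires.
func waitNodeReady(timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	tick := time.NewTicker(2 * time.Second)
	defer tick.Stop()
	for {
		if ready, err := checkReady(ctx); err == nil && ready {
			return nil
		}
		select {
		case <-ctx.Done():
			// Surfaces in the log as: WaitNodeCondition: context deadline exceeded
			return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	if err := waitNodeReady(6 * time.Minute); err != nil {
		fmt.Println("Exiting due to GUEST_START:", err)
	}
}

With the apiserver container never created, every poll returns the same dial error until the six-minute context expires.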
	I1212 01:39:18.859963  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:18.870960  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:18.871050  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:18.910477  291455 cri.go:89] found id: ""
	I1212 01:39:18.910504  291455 logs.go:282] 0 containers: []
	W1212 01:39:18.910513  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:18.910519  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:18.910580  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:18.935189  291455 cri.go:89] found id: ""
	I1212 01:39:18.935212  291455 logs.go:282] 0 containers: []
	W1212 01:39:18.935221  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:18.935226  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:18.935282  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:18.960848  291455 cri.go:89] found id: ""
	I1212 01:39:18.960874  291455 logs.go:282] 0 containers: []
	W1212 01:39:18.960883  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:18.960888  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:18.960945  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:18.999545  291455 cri.go:89] found id: ""
	I1212 01:39:18.999572  291455 logs.go:282] 0 containers: []
	W1212 01:39:18.999581  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:18.999594  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:18.999657  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:19.037306  291455 cri.go:89] found id: ""
	I1212 01:39:19.037333  291455 logs.go:282] 0 containers: []
	W1212 01:39:19.037341  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:19.037347  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:19.037405  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:19.076075  291455 cri.go:89] found id: ""
	I1212 01:39:19.076096  291455 logs.go:282] 0 containers: []
	W1212 01:39:19.076104  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:19.076114  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:19.076168  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:19.106494  291455 cri.go:89] found id: ""
	I1212 01:39:19.106515  291455 logs.go:282] 0 containers: []
	W1212 01:39:19.106524  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:19.106529  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:19.106586  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:19.133049  291455 cri.go:89] found id: ""
	I1212 01:39:19.133073  291455 logs.go:282] 0 containers: []
	W1212 01:39:19.133082  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:19.133090  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:19.133105  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:19.218096  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:19.208102    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.208898    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.210671    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.211009    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.214074    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:19.218119  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:19.218140  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:19.246120  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:19.246155  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:19.279088  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:19.279116  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:19.436253  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:19.436340  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:21.952490  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:21.962606  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:21.962676  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:21.986826  291455 cri.go:89] found id: ""
	I1212 01:39:21.986851  291455 logs.go:282] 0 containers: []
	W1212 01:39:21.986859  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:21.986866  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:21.986923  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:22.014517  291455 cri.go:89] found id: ""
	I1212 01:39:22.014541  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.014551  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:22.014557  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:22.014623  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:22.041526  291455 cri.go:89] found id: ""
	I1212 01:39:22.041552  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.041561  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:22.041568  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:22.041633  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:22.067041  291455 cri.go:89] found id: ""
	I1212 01:39:22.067069  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.067079  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:22.067086  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:22.067149  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:22.092937  291455 cri.go:89] found id: ""
	I1212 01:39:22.092973  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.092982  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:22.092988  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:22.093059  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:22.122005  291455 cri.go:89] found id: ""
	I1212 01:39:22.122031  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.122039  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:22.122045  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:22.122107  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:22.147474  291455 cri.go:89] found id: ""
	I1212 01:39:22.147500  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.147508  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:22.147515  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:22.147577  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:22.177172  291455 cri.go:89] found id: ""
	I1212 01:39:22.177199  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.177208  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:22.177219  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:22.177231  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:22.234049  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:22.234083  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:22.247594  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:22.247619  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:22.368443  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:22.359792    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.360617    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.362109    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.362602    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.364143    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:22.368462  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:22.368485  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:22.393929  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:22.393963  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:24.924468  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:24.934611  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:24.934679  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:24.960488  291455 cri.go:89] found id: ""
	I1212 01:39:24.960510  291455 logs.go:282] 0 containers: []
	W1212 01:39:24.960519  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:24.960524  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:24.960580  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:24.985199  291455 cri.go:89] found id: ""
	I1212 01:39:24.985222  291455 logs.go:282] 0 containers: []
	W1212 01:39:24.985231  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:24.985238  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:24.985295  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:25.017557  291455 cri.go:89] found id: ""
	I1212 01:39:25.017583  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.017594  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:25.017601  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:25.017673  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:25.043724  291455 cri.go:89] found id: ""
	I1212 01:39:25.043756  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.043766  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:25.043773  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:25.043836  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:25.068913  291455 cri.go:89] found id: ""
	I1212 01:39:25.068941  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.068951  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:25.068958  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:25.069021  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:25.094251  291455 cri.go:89] found id: ""
	I1212 01:39:25.094274  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.094282  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:25.094288  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:25.094347  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:25.118452  291455 cri.go:89] found id: ""
	I1212 01:39:25.118530  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.118554  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:25.118575  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:25.118691  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:25.143548  291455 cri.go:89] found id: ""
	I1212 01:39:25.143571  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.143584  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:25.143594  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:25.143605  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:25.201626  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:25.201662  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:25.214871  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:25.214900  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:25.278860  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:25.271096    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.271605    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.273123    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.273537    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.275035    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:25.278890  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:25.278903  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:25.313862  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:25.313902  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:27.877952  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:27.888461  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:27.888534  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:27.912285  291455 cri.go:89] found id: ""
	I1212 01:39:27.912308  291455 logs.go:282] 0 containers: []
	W1212 01:39:27.912317  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:27.912323  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:27.912382  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:27.936668  291455 cri.go:89] found id: ""
	I1212 01:39:27.936693  291455 logs.go:282] 0 containers: []
	W1212 01:39:27.936701  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:27.936707  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:27.936763  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:27.964911  291455 cri.go:89] found id: ""
	I1212 01:39:27.964936  291455 logs.go:282] 0 containers: []
	W1212 01:39:27.964945  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:27.964952  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:27.965011  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:27.988509  291455 cri.go:89] found id: ""
	I1212 01:39:27.988530  291455 logs.go:282] 0 containers: []
	W1212 01:39:27.988539  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:27.988545  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:27.988606  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:28.014439  291455 cri.go:89] found id: ""
	I1212 01:39:28.014461  291455 logs.go:282] 0 containers: []
	W1212 01:39:28.014469  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:28.014475  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:28.014542  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:28.040611  291455 cri.go:89] found id: ""
	I1212 01:39:28.040637  291455 logs.go:282] 0 containers: []
	W1212 01:39:28.040646  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:28.040652  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:28.040711  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:28.064823  291455 cri.go:89] found id: ""
	I1212 01:39:28.064844  291455 logs.go:282] 0 containers: []
	W1212 01:39:28.064852  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:28.064858  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:28.064922  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:28.089374  291455 cri.go:89] found id: ""
	I1212 01:39:28.089397  291455 logs.go:282] 0 containers: []
	W1212 01:39:28.089405  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:28.089414  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:28.089426  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:28.146024  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:28.146058  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:28.160130  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:28.160159  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:28.225838  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:28.217551    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.218334    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.219917    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.220532    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.222161    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:28.225864  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:28.225878  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:28.250733  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:28.250768  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:30.798068  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:30.808169  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:30.808239  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:30.836768  291455 cri.go:89] found id: ""
	I1212 01:39:30.836789  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.836798  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:30.836805  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:30.836863  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:30.860144  291455 cri.go:89] found id: ""
	I1212 01:39:30.860169  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.860179  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:30.860185  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:30.860242  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:30.884081  291455 cri.go:89] found id: ""
	I1212 01:39:30.884107  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.884116  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:30.884122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:30.884180  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:30.908110  291455 cri.go:89] found id: ""
	I1212 01:39:30.908133  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.908147  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:30.908153  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:30.908213  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:30.934406  291455 cri.go:89] found id: ""
	I1212 01:39:30.934428  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.934436  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:30.934449  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:30.934507  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:30.962854  291455 cri.go:89] found id: ""
	I1212 01:39:30.962877  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.962885  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:30.962891  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:30.962963  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:30.986340  291455 cri.go:89] found id: ""
	I1212 01:39:30.986366  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.986375  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:30.986385  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:30.986447  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:31.021526  291455 cri.go:89] found id: ""
	I1212 01:39:31.021557  291455 logs.go:282] 0 containers: []
	W1212 01:39:31.021567  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:31.021576  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:31.021586  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:31.080147  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:31.080186  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:31.094865  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:31.094894  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:31.159994  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:31.150850    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.151532    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.153295    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.153811    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.156044    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:31.160017  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:31.160030  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:31.187806  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:31.187844  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:33.721677  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:33.732122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:33.732196  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:33.756604  291455 cri.go:89] found id: ""
	I1212 01:39:33.756627  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.756636  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:33.756642  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:33.756703  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:33.782055  291455 cri.go:89] found id: ""
	I1212 01:39:33.782079  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.782088  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:33.782094  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:33.782150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:33.806217  291455 cri.go:89] found id: ""
	I1212 01:39:33.806242  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.806250  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:33.806256  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:33.806313  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:33.829556  291455 cri.go:89] found id: ""
	I1212 01:39:33.829580  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.829588  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:33.829595  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:33.829651  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:33.856222  291455 cri.go:89] found id: ""
	I1212 01:39:33.856251  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.856259  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:33.856265  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:33.856323  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:33.886601  291455 cri.go:89] found id: ""
	I1212 01:39:33.886624  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.886639  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:33.886646  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:33.886703  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:33.910597  291455 cri.go:89] found id: ""
	I1212 01:39:33.910621  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.910630  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:33.910636  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:33.910701  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:33.934158  291455 cri.go:89] found id: ""
	I1212 01:39:33.934185  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.934193  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:33.934202  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:33.934214  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:33.958501  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:33.958533  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:33.986448  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:33.986476  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:34.042064  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:34.042099  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:34.056951  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:34.056977  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:34.127667  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:34.120136    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.120757    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.121818    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.122189    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.123766    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:34.120136    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.120757    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.121818    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.122189    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.123766    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
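	For reference, one iteration of the diagnostic cycle above can be replayed by hand from inside the node (e.g. via "minikube ssh"). The sketch below is assembled only from commands already visible in this log, including the kubectl binary path and kubeconfig location that minikube itself invokes:
	
	# probe for each expected control-plane container; empty output means "not found"
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== ${name} =="
	  sudo crictl ps -a --quiet --name="${name}"
	done
	# collect the same log sources minikube gathers between probes
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# fails with "connection refused" for as long as the apiserver is down
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig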
	[... the same diagnostic cycle repeats at 01:39:36, 01:39:39, 01:39:42, 01:39:45, 01:39:48, and 01:39:51 — output identical to the 01:39:33 cycle above except for timestamps, the kubectl process IDs (9447, 9558, 9684, 9776, 9897, 10003), and the order in which the log sources are gathered; each iteration again finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or kubernetes-dashboard containers, and "describe nodes" keeps failing with connection refused on localhost:8443 ...]
	I1212 01:39:54.280411  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:54.290776  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:54.290856  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:54.315212  291455 cri.go:89] found id: ""
	I1212 01:39:54.315236  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.315246  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:54.315253  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:54.315311  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:54.339856  291455 cri.go:89] found id: ""
	I1212 01:39:54.339881  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.339890  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:54.339896  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:54.339958  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:54.368679  291455 cri.go:89] found id: ""
	I1212 01:39:54.368702  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.368711  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:54.368717  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:54.368776  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:54.393467  291455 cri.go:89] found id: ""
	I1212 01:39:54.393491  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.393500  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:54.393507  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:54.393566  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:54.418691  291455 cri.go:89] found id: ""
	I1212 01:39:54.418713  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.418722  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:54.418728  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:54.418785  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:54.444722  291455 cri.go:89] found id: ""
	I1212 01:39:54.444745  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.444759  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:54.444766  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:54.444824  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:54.470007  291455 cri.go:89] found id: ""
	I1212 01:39:54.470029  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.470037  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:54.470043  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:54.470104  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:54.494270  291455 cri.go:89] found id: ""
	I1212 01:39:54.494340  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.494354  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:54.494364  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:54.494403  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:54.599318  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:54.577598   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.579503   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.592839   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.593570   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.595243   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:54.577598   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.579503   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.592839   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.593570   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.595243   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:54.599389  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:54.599417  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:54.630152  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:54.630190  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:54.658141  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:54.658167  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:54.713516  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:54.713551  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:57.227361  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:57.237887  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:57.237955  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:57.262202  291455 cri.go:89] found id: ""
	I1212 01:39:57.262227  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.262236  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:57.262242  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:57.262299  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:57.287795  291455 cri.go:89] found id: ""
	I1212 01:39:57.287819  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.287828  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:57.287834  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:57.287900  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:57.312347  291455 cri.go:89] found id: ""
	I1212 01:39:57.312372  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.312381  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:57.312387  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:57.312448  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:57.340890  291455 cri.go:89] found id: ""
	I1212 01:39:57.340914  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.340924  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:57.340930  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:57.340994  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:57.364578  291455 cri.go:89] found id: ""
	I1212 01:39:57.364643  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.364658  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:57.364666  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:57.364735  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:57.389147  291455 cri.go:89] found id: ""
	I1212 01:39:57.389175  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.389184  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:57.389191  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:57.389248  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:57.415275  291455 cri.go:89] found id: ""
	I1212 01:39:57.415300  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.415315  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:57.415322  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:57.415385  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:57.440087  291455 cri.go:89] found id: ""
	I1212 01:39:57.440109  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.440118  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:57.440127  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:57.440138  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:57.467124  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:57.467150  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:57.522232  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:57.522269  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:57.538082  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:57.538160  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:57.643552  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:57.631917   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.632540   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.634329   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.634855   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.636638   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:57.631917   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.632540   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.634329   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.634855   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.636638   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:57.643574  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:57.643586  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:00.169313  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:00.228741  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:00.228823  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:00.303835  291455 cri.go:89] found id: ""
	I1212 01:40:00.305186  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.305353  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:00.309177  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:00.309371  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:00.363791  291455 cri.go:89] found id: ""
	I1212 01:40:00.363817  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.363826  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:00.363832  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:00.363904  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:00.428687  291455 cri.go:89] found id: ""
	I1212 01:40:00.428710  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.428720  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:00.428727  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:00.428821  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:00.471696  291455 cri.go:89] found id: ""
	I1212 01:40:00.471723  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.471732  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:00.471740  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:00.471820  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:00.509321  291455 cri.go:89] found id: ""
	I1212 01:40:00.509347  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.509372  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:00.509381  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:00.509460  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:00.593692  291455 cri.go:89] found id: ""
	I1212 01:40:00.593716  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.593725  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:00.593732  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:00.593800  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:00.662781  291455 cri.go:89] found id: ""
	I1212 01:40:00.662804  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.662813  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:00.662819  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:00.662912  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:00.689999  291455 cri.go:89] found id: ""
	I1212 01:40:00.690023  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.690031  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:00.690041  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:00.690053  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:00.747296  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:00.747331  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:00.761427  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:00.761454  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:00.828444  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:00.819830   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.820596   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.822241   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.822754   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.824365   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:00.819830   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.820596   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.822241   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.822754   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.824365   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:00.828466  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:00.828479  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:00.855218  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:00.855254  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:03.387867  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:03.398566  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:03.398659  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:03.427352  291455 cri.go:89] found id: ""
	I1212 01:40:03.427376  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.427385  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:03.427391  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:03.427456  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:03.451979  291455 cri.go:89] found id: ""
	I1212 01:40:03.452054  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.452069  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:03.452076  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:03.452150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:03.475705  291455 cri.go:89] found id: ""
	I1212 01:40:03.475729  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.475739  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:03.475744  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:03.475831  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:03.500258  291455 cri.go:89] found id: ""
	I1212 01:40:03.500283  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.500293  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:03.500300  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:03.500360  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:03.528939  291455 cri.go:89] found id: ""
	I1212 01:40:03.528962  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.528971  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:03.528976  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:03.529037  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:03.557541  291455 cri.go:89] found id: ""
	I1212 01:40:03.557566  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.557575  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:03.557581  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:03.557645  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:03.611801  291455 cri.go:89] found id: ""
	I1212 01:40:03.611827  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.611837  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:03.611843  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:03.611906  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:03.641008  291455 cri.go:89] found id: ""
	I1212 01:40:03.641034  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.641043  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:03.641053  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:03.641064  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:03.696830  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:03.696868  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:03.710227  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:03.710256  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:03.777119  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:03.769143   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.769540   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.771066   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.771655   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.773341   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:03.769143   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.769540   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.771066   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.771655   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.773341   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:03.777184  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:03.777203  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:03.802465  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:03.802497  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:06.331826  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:06.342482  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:06.342547  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:06.366505  291455 cri.go:89] found id: ""
	I1212 01:40:06.366527  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.366536  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:06.366542  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:06.366599  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:06.391672  291455 cri.go:89] found id: ""
	I1212 01:40:06.391696  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.391705  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:06.391711  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:06.391774  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:06.416914  291455 cri.go:89] found id: ""
	I1212 01:40:06.416941  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.416950  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:06.416956  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:06.417031  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:06.441562  291455 cri.go:89] found id: ""
	I1212 01:40:06.441584  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.441599  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:06.441606  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:06.441665  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:06.469918  291455 cri.go:89] found id: ""
	I1212 01:40:06.469942  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.469951  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:06.469957  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:06.470014  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:06.494455  291455 cri.go:89] found id: ""
	I1212 01:40:06.494478  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.494487  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:06.494503  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:06.494579  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:06.520013  291455 cri.go:89] found id: ""
	I1212 01:40:06.520037  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.520046  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:06.520052  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:06.520108  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:06.571478  291455 cri.go:89] found id: ""
	I1212 01:40:06.571509  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.571518  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:06.571528  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:06.571539  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:06.616555  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:06.616594  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:06.657561  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:06.657589  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:06.715328  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:06.715409  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:06.728591  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:06.728620  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:06.792104  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:06.783643   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.784436   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.785957   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.786254   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.787744   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:06.783643   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.784436   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.785957   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.786254   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.787744   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
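	The timestamps show the wait loop retrying on a roughly three-second cadence (01:39:51, 01:39:54, 01:39:57, 01:40:00, 01:40:03, 01:40:06, ...), with every pass finding zero containers for each control-plane name it checks. One way to watch the same signal interactively, assuming the watch utility is present on the node (it is not used anywhere in this log):
	
	    # Re-run the container probe every 3 seconds; output stays empty in this failure mode
	    watch -n 3 'sudo crictl ps -a --name=kube-apiserver'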
	I1212 01:40:09.292912  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:09.303462  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:09.303537  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:09.329031  291455 cri.go:89] found id: ""
	I1212 01:40:09.329057  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.329066  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:09.329072  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:09.329188  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:09.353474  291455 cri.go:89] found id: ""
	I1212 01:40:09.353498  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.353507  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:09.353513  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:09.353570  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:09.380805  291455 cri.go:89] found id: ""
	I1212 01:40:09.380830  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.380839  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:09.380845  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:09.380959  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:09.408831  291455 cri.go:89] found id: ""
	I1212 01:40:09.408854  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.408862  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:09.408868  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:09.408943  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:09.433352  291455 cri.go:89] found id: ""
	I1212 01:40:09.433374  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.433383  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:09.433389  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:09.433450  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:09.458129  291455 cri.go:89] found id: ""
	I1212 01:40:09.458149  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.458158  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:09.458165  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:09.458222  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:09.484528  291455 cri.go:89] found id: ""
	I1212 01:40:09.484552  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.484560  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:09.484567  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:09.484624  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:09.512777  291455 cri.go:89] found id: ""
	I1212 01:40:09.512802  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.512811  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:09.512820  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:09.512831  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:09.563517  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:09.563545  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:09.660558  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:09.660595  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:09.674516  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:09.674541  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:09.738215  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:09.730040   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.730861   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.732394   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.732881   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.734347   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:09.730040   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.730861   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.732394   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.732881   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.734347   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:09.738241  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:09.738253  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:12.263748  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:12.273959  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:12.274029  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:12.297055  291455 cri.go:89] found id: ""
	I1212 01:40:12.297087  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.297096  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:12.297118  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:12.297179  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:12.322284  291455 cri.go:89] found id: ""
	I1212 01:40:12.322308  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.322317  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:12.322323  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:12.322397  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:12.345905  291455 cri.go:89] found id: ""
	I1212 01:40:12.345929  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.345938  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:12.345944  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:12.346024  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:12.370571  291455 cri.go:89] found id: ""
	I1212 01:40:12.370593  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.370602  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:12.370608  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:12.370695  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:12.397426  291455 cri.go:89] found id: ""
	I1212 01:40:12.397473  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.397495  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:12.397514  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:12.397602  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:12.426531  291455 cri.go:89] found id: ""
	I1212 01:40:12.426556  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.426564  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:12.426571  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:12.426644  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:12.450837  291455 cri.go:89] found id: ""
	I1212 01:40:12.450864  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.450874  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:12.450882  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:12.450941  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:12.475392  291455 cri.go:89] found id: ""
	I1212 01:40:12.475415  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.475423  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:12.475433  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:12.475443  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:12.500596  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:12.500630  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:12.539878  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:12.539912  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:12.636980  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:12.637024  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:12.651233  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:12.651261  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:12.719321  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:12.710320   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.711168   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.712905   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.713556   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.715342   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:12.710320   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.711168   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.712905   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.713556   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.715342   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
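	Each kubectl invocation prints five memcache.go:265 errors before the final refusal; these appear to be client-go's repeated attempts to fetch the server API group list during discovery, each refused because nothing is listening on port 8443. A quick way to hit the same endpoint directly, assuming curl is available in the node (the URL is taken verbatim from the errors above); expect the same connection refused:
	
	    # The discovery URL kubectl is retrying; fails identically while no apiserver runs
	    curl -k 'https://localhost:8443/api?timeout=32s'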
	I1212 01:40:15.219607  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:15.230736  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:15.230837  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:15.255192  291455 cri.go:89] found id: ""
	I1212 01:40:15.255216  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.255225  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:15.255250  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:15.255312  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:15.280065  291455 cri.go:89] found id: ""
	I1212 01:40:15.280088  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.280097  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:15.280103  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:15.280182  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:15.305428  291455 cri.go:89] found id: ""
	I1212 01:40:15.305451  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.305460  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:15.305467  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:15.305533  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:15.329513  291455 cri.go:89] found id: ""
	I1212 01:40:15.329537  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.329545  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:15.329552  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:15.329612  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:15.353724  291455 cri.go:89] found id: ""
	I1212 01:40:15.353748  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.353757  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:15.353764  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:15.353821  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:15.379891  291455 cri.go:89] found id: ""
	I1212 01:40:15.379921  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.379930  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:15.379936  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:15.379994  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:15.410206  291455 cri.go:89] found id: ""
	I1212 01:40:15.410232  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.410242  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:15.410249  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:15.410308  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:15.436574  291455 cri.go:89] found id: ""
	I1212 01:40:15.436607  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.436616  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:15.436628  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:15.436640  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:15.496631  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:15.496672  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:15.511586  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:15.511614  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:15.643166  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:15.635198   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.635698   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.637279   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.637833   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.639441   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:15.643192  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:15.643208  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:15.668006  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:15.668044  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
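	The same five log sources are collected on every retry below. A minimal sketch for reproducing this collection by hand, using the exact commands from the log (assumes a shell on the node, e.g. via `minikube ssh`, and sudo access; the `$(...)` form is equivalent to the backticks minikube logs):

	    # Reproduce minikube's per-retry diagnostics (commands copied from the log above)
	    sudo journalctl -u kubelet -n 400                                          # kubelet logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings/errors
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u containerd -n 400                                       # container runtime logs
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a             # container status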
	I1212 01:40:18.199232  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:18.210162  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:18.210237  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:18.235304  291455 cri.go:89] found id: ""
	I1212 01:40:18.235330  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.235339  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:18.235347  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:18.235412  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:18.261126  291455 cri.go:89] found id: ""
	I1212 01:40:18.261149  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.261157  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:18.261163  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:18.261225  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:18.285920  291455 cri.go:89] found id: ""
	I1212 01:40:18.285946  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.285954  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:18.285961  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:18.286056  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:18.310447  291455 cri.go:89] found id: ""
	I1212 01:40:18.310490  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.310500  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:18.310523  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:18.310601  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:18.334613  291455 cri.go:89] found id: ""
	I1212 01:40:18.334643  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.334653  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:18.334659  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:18.334725  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:18.363763  291455 cri.go:89] found id: ""
	I1212 01:40:18.363787  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.363797  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:18.363803  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:18.363864  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:18.389696  291455 cri.go:89] found id: ""
	I1212 01:40:18.389730  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.389739  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:18.389745  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:18.389812  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:18.416961  291455 cri.go:89] found id: ""
	I1212 01:40:18.417035  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.417059  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
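	The eight probes above repeat for every expected control-plane component. A compact, hand-runnable form of the same sweep (container names and crictl flags exactly as logged; the shell loop itself is an illustrative rewrite, not minikube's Go code):

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")        # list all containers matching the name
	      [ -z "$ids" ] && echo "no container matching \"$c\""
	    done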
	I1212 01:40:18.417077  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:18.417104  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:18.474235  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:18.474268  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:18.487640  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:18.487666  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:18.567561  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:18.554594   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.555595   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.560540   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.560843   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.562408   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:18.567584  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:18.567597  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:18.597523  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:18.597557  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:21.132296  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:21.142685  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:21.142760  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:21.171993  291455 cri.go:89] found id: ""
	I1212 01:40:21.172020  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.172029  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:21.172035  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:21.172096  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:21.195907  291455 cri.go:89] found id: ""
	I1212 01:40:21.195929  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.195938  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:21.195944  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:21.196007  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:21.219496  291455 cri.go:89] found id: ""
	I1212 01:40:21.219524  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.219533  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:21.219540  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:21.219601  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:21.243807  291455 cri.go:89] found id: ""
	I1212 01:40:21.243834  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.243844  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:21.243850  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:21.243910  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:21.268956  291455 cri.go:89] found id: ""
	I1212 01:40:21.268977  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.268986  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:21.268993  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:21.269052  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:21.297557  291455 cri.go:89] found id: ""
	I1212 01:40:21.297580  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.297588  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:21.297595  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:21.297652  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:21.321755  291455 cri.go:89] found id: ""
	I1212 01:40:21.321776  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.321791  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:21.321798  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:21.321861  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:21.349054  291455 cri.go:89] found id: ""
	I1212 01:40:21.349076  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.349085  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:21.349094  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:21.349108  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:21.374597  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:21.374636  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:21.403444  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:21.403469  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:21.461656  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:21.461690  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:21.475293  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:21.475320  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:21.560836  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:21.545907   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.546745   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.548429   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.548732   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.550543   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
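	Every `describe nodes` attempt fails identically: nothing answers on 127.0.0.1:8443 inside the node. Two quick checks that separate "apiserver process never bound the port" from "bound but unhealthy" (hedged: neither `ss` nor `curl` appears in this log, so this assumes both exist in the node image):

	    sudo ss -ltnp | grep 8443 || echo "nothing listening on :8443"        # any listener on the apiserver port?
	    curl -ksf https://localhost:8443/healthz || echo "healthz unreachable" # apiserver health endpoint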
	I1212 01:40:24.061094  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:24.071831  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:24.071913  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:24.097936  291455 cri.go:89] found id: ""
	I1212 01:40:24.097962  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.097971  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:24.097978  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:24.098036  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:24.127785  291455 cri.go:89] found id: ""
	I1212 01:40:24.127809  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.127819  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:24.127826  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:24.127889  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:24.153026  291455 cri.go:89] found id: ""
	I1212 01:40:24.153052  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.153063  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:24.153068  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:24.153127  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:24.176972  291455 cri.go:89] found id: ""
	I1212 01:40:24.176997  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.177006  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:24.177013  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:24.177073  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:24.213590  291455 cri.go:89] found id: ""
	I1212 01:40:24.213614  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.213623  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:24.213638  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:24.213696  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:24.241058  291455 cri.go:89] found id: ""
	I1212 01:40:24.241084  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.241092  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:24.241099  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:24.241158  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:24.265936  291455 cri.go:89] found id: ""
	I1212 01:40:24.265977  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.265985  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:24.265991  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:24.266050  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:24.289751  291455 cri.go:89] found id: ""
	I1212 01:40:24.289779  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.289788  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:24.289798  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:24.289809  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:24.316973  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:24.316999  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:24.372346  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:24.372380  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:24.385931  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:24.385960  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:24.453792  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:24.445261   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.445682   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.447332   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.447939   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.449784   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
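	The kubeconfig in use lives at /var/lib/minikube/kubeconfig and evidently targets https://localhost:8443. A hedged way to confirm the endpoint and, once the apiserver does start, query its readiness (paths taken from the log; `/readyz` is the standard kube-apiserver readiness endpoint, not something shown here):

	    sudo grep 'server:' /var/lib/minikube/kubeconfig     # confirm which endpoint kubectl dials
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get --raw=/readyz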
	I1212 01:40:24.453813  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:24.453826  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:26.980134  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:26.991597  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:26.991671  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:27.019040  291455 cri.go:89] found id: ""
	I1212 01:40:27.019064  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.019073  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:27.019080  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:27.019154  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:27.046812  291455 cri.go:89] found id: ""
	I1212 01:40:27.046841  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.046854  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:27.046860  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:27.046968  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:27.071383  291455 cri.go:89] found id: ""
	I1212 01:40:27.071405  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.071414  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:27.071420  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:27.071490  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:27.095638  291455 cri.go:89] found id: ""
	I1212 01:40:27.095663  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.095672  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:27.095678  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:27.095755  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:27.119028  291455 cri.go:89] found id: ""
	I1212 01:40:27.119050  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.119059  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:27.119064  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:27.119123  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:27.143722  291455 cri.go:89] found id: ""
	I1212 01:40:27.143748  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.143757  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:27.143763  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:27.143839  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:27.167989  291455 cri.go:89] found id: ""
	I1212 01:40:27.168066  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.168088  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:27.168097  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:27.168168  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:27.193229  291455 cri.go:89] found id: ""
	I1212 01:40:27.193269  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.193279  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:27.193289  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:27.193304  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:27.248752  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:27.248788  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:27.262591  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:27.262627  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:27.329086  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:27.321229   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.321673   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.323243   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.323775   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.325351   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:27.329111  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:27.329123  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:27.354405  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:27.354442  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
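	The timestamps show the harness re-running the full probe cycle roughly every three seconds. A minimal sketch of an equivalent wait loop with a deadline (illustrative only; minikube's actual retry logic is Go, not shell):

	    deadline=$((SECONDS + 300))                 # give up after 5 minutes
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never started"; break; }
	      sleep 3                                   # matches the ~3s cadence in this log
	    done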
	I1212 01:40:29.885003  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:29.896299  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:29.896378  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:29.922911  291455 cri.go:89] found id: ""
	I1212 01:40:29.922945  291455 logs.go:282] 0 containers: []
	W1212 01:40:29.922954  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:29.922961  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:29.923063  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:29.949238  291455 cri.go:89] found id: ""
	I1212 01:40:29.949264  291455 logs.go:282] 0 containers: []
	W1212 01:40:29.949273  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:29.949280  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:29.949338  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:29.974510  291455 cri.go:89] found id: ""
	I1212 01:40:29.974536  291455 logs.go:282] 0 containers: []
	W1212 01:40:29.974545  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:29.974551  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:29.974608  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:29.999116  291455 cri.go:89] found id: ""
	I1212 01:40:29.999142  291455 logs.go:282] 0 containers: []
	W1212 01:40:29.999151  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:29.999157  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:29.999223  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:30.078011  291455 cri.go:89] found id: ""
	I1212 01:40:30.078040  291455 logs.go:282] 0 containers: []
	W1212 01:40:30.078050  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:30.078058  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:30.078132  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:30.105966  291455 cri.go:89] found id: ""
	I1212 01:40:30.105993  291455 logs.go:282] 0 containers: []
	W1212 01:40:30.106003  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:30.106010  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:30.106078  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:30.134703  291455 cri.go:89] found id: ""
	I1212 01:40:30.134726  291455 logs.go:282] 0 containers: []
	W1212 01:40:30.134735  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:30.134780  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:30.134874  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:30.161984  291455 cri.go:89] found id: ""
	I1212 01:40:30.162009  291455 logs.go:282] 0 containers: []
	W1212 01:40:30.162018  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:30.162028  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:30.162039  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:30.193075  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:30.193103  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:30.252472  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:30.252508  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:30.266246  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:30.266276  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:30.333852  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:30.325323   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:30.325890   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:30.327426   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:30.327865   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:30.329291   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:30.333874  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:30.333886  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
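	With no Kubernetes containers ever created, the containerd journal is the most likely place to show why (image pull failures, sandbox creation errors, cgroup issues). A hedged filter over the same 400 lines minikube collects (`--no-pager` and the grep are additions, not from the log):

	    sudo journalctl -u containerd -n 400 --no-pager | grep -iE 'error|fail|fatal' | tail -n 20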
	I1212 01:40:32.860948  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:32.872085  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:32.872163  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:32.901386  291455 cri.go:89] found id: ""
	I1212 01:40:32.901410  291455 logs.go:282] 0 containers: []
	W1212 01:40:32.901425  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:32.901438  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:32.901499  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:32.926818  291455 cri.go:89] found id: ""
	I1212 01:40:32.926844  291455 logs.go:282] 0 containers: []
	W1212 01:40:32.926853  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:32.926859  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:32.926927  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:32.956149  291455 cri.go:89] found id: ""
	I1212 01:40:32.956187  291455 logs.go:282] 0 containers: []
	W1212 01:40:32.956196  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:32.956202  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:32.956259  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:32.988134  291455 cri.go:89] found id: ""
	I1212 01:40:32.988159  291455 logs.go:282] 0 containers: []
	W1212 01:40:32.988168  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:32.988174  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:32.988231  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:33.014432  291455 cri.go:89] found id: ""
	I1212 01:40:33.014459  291455 logs.go:282] 0 containers: []
	W1212 01:40:33.014468  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:33.014474  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:33.014534  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:33.039814  291455 cri.go:89] found id: ""
	I1212 01:40:33.039843  291455 logs.go:282] 0 containers: []
	W1212 01:40:33.039852  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:33.039859  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:33.039921  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:33.068378  291455 cri.go:89] found id: ""
	I1212 01:40:33.068401  291455 logs.go:282] 0 containers: []
	W1212 01:40:33.068410  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:33.068417  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:33.068475  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:33.097661  291455 cri.go:89] found id: ""
	I1212 01:40:33.097725  291455 logs.go:282] 0 containers: []
	W1212 01:40:33.097750  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:33.097775  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:33.097803  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:33.129775  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:33.129802  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:33.189298  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:33.189332  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:33.202981  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:33.203026  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:33.264626  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:33.256228   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:33.256801   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:33.258449   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:33.259112   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:33.260717   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:33.264648  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:33.264665  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:35.791109  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:35.807877  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:35.807951  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:35.851416  291455 cri.go:89] found id: ""
	I1212 01:40:35.851442  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.851450  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:35.851456  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:35.851518  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:35.888920  291455 cri.go:89] found id: ""
	I1212 01:40:35.888943  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.888952  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:35.888958  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:35.889018  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:35.915592  291455 cri.go:89] found id: ""
	I1212 01:40:35.915618  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.915628  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:35.915634  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:35.915715  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:35.939272  291455 cri.go:89] found id: ""
	I1212 01:40:35.939296  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.939305  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:35.939311  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:35.939370  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:35.968216  291455 cri.go:89] found id: ""
	I1212 01:40:35.968244  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.968253  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:35.968259  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:35.968317  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:35.993761  291455 cri.go:89] found id: ""
	I1212 01:40:35.993785  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.993796  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:35.993803  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:35.993863  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:36.022585  291455 cri.go:89] found id: ""
	I1212 01:40:36.022612  291455 logs.go:282] 0 containers: []
	W1212 01:40:36.022633  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:36.022640  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:36.022712  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:36.052933  291455 cri.go:89] found id: ""
	I1212 01:40:36.052955  291455 logs.go:282] 0 containers: []
	W1212 01:40:36.052965  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:36.052974  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:36.052991  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:36.122317  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:36.113883   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:36.114412   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:36.115894   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:36.116408   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:36.118260   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:36.122340  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:36.122353  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:36.146907  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:36.146940  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:36.174411  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:36.174444  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:36.229229  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:36.229259  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
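	Should a kube-apiserver container ever appear, its ID can be fed straight to `crictl logs`. A sketch building on the probe above (`crictl logs --tail` is a standard crictl flag, though it is not used anywhere in this log):

	    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)   # newest matching container, if any
	    [ -n "$id" ] && sudo crictl logs --tail=100 "$id"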
	I1212 01:40:38.742843  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:38.753061  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:38.753132  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:38.777998  291455 cri.go:89] found id: ""
	I1212 01:40:38.778024  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.778033  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:38.778039  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:38.778098  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:38.819601  291455 cri.go:89] found id: ""
	I1212 01:40:38.819630  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.819639  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:38.819649  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:38.819726  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:38.863492  291455 cri.go:89] found id: ""
	I1212 01:40:38.863555  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.863567  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:38.863574  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:38.863640  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:38.896081  291455 cri.go:89] found id: ""
	I1212 01:40:38.896109  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.896118  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:38.896124  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:38.896189  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:38.923782  291455 cri.go:89] found id: ""
	I1212 01:40:38.923824  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.923832  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:38.923838  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:38.923896  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:38.948257  291455 cri.go:89] found id: ""
	I1212 01:40:38.948289  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.948305  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:38.948312  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:38.948379  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:38.974066  291455 cri.go:89] found id: ""
	I1212 01:40:38.974090  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.974098  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:38.974104  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:38.974163  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:38.999566  291455 cri.go:89] found id: ""
	I1212 01:40:38.999654  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.999670  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
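[Annotation] Each retry cycle above opens with the same per-component probe: minikube asks the containerd CRI for any container, running or exited, whose name matches each expected control-plane component, and every probe returns an empty ID list — the runtime never created a single control-plane container. A minimal sketch of the same probe, assuming shell access to the affected node (e.g. via `minikube ssh`) and that `crictl` is on PATH, as the log's own commands imply:

    # Probe the CRI for every container name the wait loop checks above.
    # An empty result for a name means the runtime never created it.
    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done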
	I1212 01:40:38.999681  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:38.999693  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:39.032809  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:39.032845  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:39.061204  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:39.061234  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:39.116485  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:39.116516  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:39.129984  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:39.130014  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:39.195391  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:39.187100   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.187857   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.189545   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.190069   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.191706   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:39.187100   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.187857   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.189545   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.190069   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.191706   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
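[Annotation] The "describe nodes" probe fails for the same underlying reason: the node-local kubeconfig points kubectl at https://localhost:8443, and with no kube-apiserver container ever created there is nothing listening, so every request dies with connection refused on [::1]:8443. A quick manual confirmation of that symptom — a sketch, assuming `ss` and `curl` are available inside the node:

    # Both checks should come up empty / fail while the apiserver is absent.
    sudo ss -ltnp | grep -w 8443 || echo "nothing listening on :8443"
    curl -sk https://localhost:8443/healthz || echo "refused, matching the log"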
	I1212 01:40:41.695676  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:41.707011  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:41.707085  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:41.731224  291455 cri.go:89] found id: ""
	I1212 01:40:41.731295  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.731318  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:41.731337  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:41.731422  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:41.759193  291455 cri.go:89] found id: ""
	I1212 01:40:41.759266  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.759289  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:41.759308  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:41.759394  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:41.793923  291455 cri.go:89] found id: ""
	I1212 01:40:41.793994  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.794017  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:41.794038  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:41.794121  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:41.844183  291455 cri.go:89] found id: ""
	I1212 01:40:41.844246  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.844277  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:41.844297  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:41.844405  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:41.880181  291455 cri.go:89] found id: ""
	I1212 01:40:41.880253  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.880288  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:41.880312  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:41.880412  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:41.908685  291455 cri.go:89] found id: ""
	I1212 01:40:41.908760  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.908776  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:41.908783  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:41.908840  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:41.933232  291455 cri.go:89] found id: ""
	I1212 01:40:41.933257  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.933265  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:41.933272  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:41.933361  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:41.957941  291455 cri.go:89] found id: ""
	I1212 01:40:41.957966  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.957975  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:41.957993  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:41.958004  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:42.012839  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:42.012878  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:42.028378  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:42.028410  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:42.099435  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:42.089806   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.091059   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.092313   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.093469   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.094522   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:42.089806   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.091059   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.092313   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.093469   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.094522   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:42.099461  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:42.099477  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:42.127956  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:42.127997  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
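[Annotation] Between probes, the wait loop gathers the same evidence sources each time: the kubelet and containerd journald units, high-severity kernel messages, the node description, and an overall container listing with a docker fallback. Collected by hand, the equivalent bundle looks roughly like this (a sketch; the flags are exactly those shown in the log, and systemd-journald inside the node is assumed):

    sudo journalctl -u kubelet -n 400     > kubelet.log
    sudo journalctl -u containerd -n 400  > containerd.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo crictl ps -a > containers.log 2>&1 || sudo docker ps -a > containers.log 2>&1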
	I1212 01:40:44.666695  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:44.677340  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:44.677417  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:44.701562  291455 cri.go:89] found id: ""
	I1212 01:40:44.701585  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.701594  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:44.701600  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:44.701657  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:44.726430  291455 cri.go:89] found id: ""
	I1212 01:40:44.726452  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.726460  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:44.726466  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:44.726555  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:44.755275  291455 cri.go:89] found id: ""
	I1212 01:40:44.755298  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.755306  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:44.755312  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:44.755367  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:44.780079  291455 cri.go:89] found id: ""
	I1212 01:40:44.780105  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.780114  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:44.780120  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:44.780194  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:44.869405  291455 cri.go:89] found id: ""
	I1212 01:40:44.869429  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.869437  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:44.869444  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:44.869510  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:44.895160  291455 cri.go:89] found id: ""
	I1212 01:40:44.895186  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.895195  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:44.895201  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:44.895258  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:44.919698  291455 cri.go:89] found id: ""
	I1212 01:40:44.919721  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.919730  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:44.919736  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:44.919792  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:44.944054  291455 cri.go:89] found id: ""
	I1212 01:40:44.944076  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.944085  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:44.944093  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:44.944104  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:44.968670  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:44.968701  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:44.997722  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:44.997750  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:45.076118  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:45.076163  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:45.092613  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:45.092646  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:45.185594  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:45.175075   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.176253   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.177119   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.179849   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.180652   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:45.175075   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.176253   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.177119   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.179849   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.180652   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
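[Annotation] The describe-nodes step itself uses the version-pinned kubectl binary that minikube installs on the node, pointed at the node-local kubeconfig. It can be replayed verbatim from a shell on the node (both paths are taken directly from the log):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig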
	I1212 01:40:47.686812  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:47.697462  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:47.697534  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:47.725301  291455 cri.go:89] found id: ""
	I1212 01:40:47.725327  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.725336  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:47.725342  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:47.725406  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:47.750015  291455 cri.go:89] found id: ""
	I1212 01:40:47.750040  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.750050  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:47.750057  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:47.750116  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:47.774576  291455 cri.go:89] found id: ""
	I1212 01:40:47.774604  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.774613  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:47.774620  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:47.774679  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:47.823337  291455 cri.go:89] found id: ""
	I1212 01:40:47.823365  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.823374  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:47.823381  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:47.823451  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:47.863754  291455 cri.go:89] found id: ""
	I1212 01:40:47.863776  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.863785  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:47.863791  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:47.863851  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:47.892358  291455 cri.go:89] found id: ""
	I1212 01:40:47.892383  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.892391  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:47.892398  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:47.892463  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:47.916778  291455 cri.go:89] found id: ""
	I1212 01:40:47.916805  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.916815  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:47.916821  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:47.916900  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:47.942154  291455 cri.go:89] found id: ""
	I1212 01:40:47.942177  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.942185  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:47.942194  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:47.942208  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:47.955644  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:47.955725  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:48.027299  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:48.016837   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.017641   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.019747   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.020636   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.022832   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:48.016837   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.017641   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.019747   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.020636   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.022832   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:48.027326  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:48.027340  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:48.052933  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:48.052970  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:48.089641  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:48.089674  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:50.649196  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:50.660069  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:50.660143  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:50.685271  291455 cri.go:89] found id: ""
	I1212 01:40:50.685299  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.685309  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:50.685316  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:50.685378  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:50.712999  291455 cri.go:89] found id: ""
	I1212 01:40:50.713025  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.713034  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:50.713040  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:50.713099  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:50.737720  291455 cri.go:89] found id: ""
	I1212 01:40:50.737745  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.737754  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:50.737761  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:50.737828  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:50.763261  291455 cri.go:89] found id: ""
	I1212 01:40:50.763286  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.763294  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:50.763300  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:50.763358  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:50.811665  291455 cri.go:89] found id: ""
	I1212 01:40:50.811692  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.811701  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:50.811707  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:50.811768  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:50.870884  291455 cri.go:89] found id: ""
	I1212 01:40:50.870909  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.870921  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:50.870927  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:50.870986  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:50.896362  291455 cri.go:89] found id: ""
	I1212 01:40:50.896387  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.896395  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:50.896401  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:50.896457  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:50.924933  291455 cri.go:89] found id: ""
	I1212 01:40:50.924956  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.924964  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:50.924974  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:50.924986  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:50.982505  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:50.982537  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:50.996444  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:50.996467  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:51.075810  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:51.067309   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.068132   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.069797   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.070277   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.071678   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:51.067309   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.068132   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.069797   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.070277   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.071678   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:51.075896  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:51.075929  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:51.100541  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:51.100577  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:53.629887  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:53.640204  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:53.640274  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:53.665408  291455 cri.go:89] found id: ""
	I1212 01:40:53.665487  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.665511  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:53.665531  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:53.665616  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:53.693593  291455 cri.go:89] found id: ""
	I1212 01:40:53.693620  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.693629  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:53.693635  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:53.693693  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:53.717209  291455 cri.go:89] found id: ""
	I1212 01:40:53.717234  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.717243  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:53.717249  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:53.717305  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:53.742008  291455 cri.go:89] found id: ""
	I1212 01:40:53.742033  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.742042  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:53.742049  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:53.742106  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:53.766463  291455 cri.go:89] found id: ""
	I1212 01:40:53.766489  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.766498  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:53.766505  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:53.766562  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:53.832090  291455 cri.go:89] found id: ""
	I1212 01:40:53.832118  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.832133  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:53.832140  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:53.832201  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:53.877395  291455 cri.go:89] found id: ""
	I1212 01:40:53.877422  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.877431  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:53.877438  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:53.877497  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:53.905857  291455 cri.go:89] found id: ""
	I1212 01:40:53.905883  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.905891  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:53.905900  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:53.905912  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:53.936211  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:53.936236  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:53.990768  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:53.990801  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:54.005707  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:54.005751  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:54.077323  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:54.068912   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.069627   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.071278   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.071804   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.073346   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:54.068912   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.069627   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.071278   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.071804   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.073346   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:54.077345  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:54.077361  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:56.603783  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:56.614362  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:56.614437  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:56.639205  291455 cri.go:89] found id: ""
	I1212 01:40:56.639230  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.639239  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:56.639245  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:56.639302  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:56.664961  291455 cri.go:89] found id: ""
	I1212 01:40:56.664983  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.664991  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:56.664997  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:56.665055  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:56.689125  291455 cri.go:89] found id: ""
	I1212 01:40:56.689148  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.689163  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:56.689169  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:56.689228  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:56.713944  291455 cri.go:89] found id: ""
	I1212 01:40:56.713969  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.713977  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:56.713984  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:56.714045  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:56.742503  291455 cri.go:89] found id: ""
	I1212 01:40:56.742536  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.742546  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:56.742552  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:56.742610  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:56.768074  291455 cri.go:89] found id: ""
	I1212 01:40:56.768101  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.768110  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:56.768116  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:56.768176  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:56.822219  291455 cri.go:89] found id: ""
	I1212 01:40:56.822241  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.822250  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:56.822256  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:56.822326  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:56.877551  291455 cri.go:89] found id: ""
	I1212 01:40:56.877579  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.877588  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:56.877598  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:56.877609  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:56.951400  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:56.942725   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.943403   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.945223   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.945864   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.947463   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:56.942725   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.943403   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.945223   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.945864   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.947463   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:56.951423  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:56.951435  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:56.976432  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:56.976471  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:57.016067  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:57.016095  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:57.076530  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:57.076562  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:59.590650  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:59.601442  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:59.601513  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:59.627392  291455 cri.go:89] found id: ""
	I1212 01:40:59.627418  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.627426  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:59.627433  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:59.627492  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:59.652525  291455 cri.go:89] found id: ""
	I1212 01:40:59.652546  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.652555  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:59.652560  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:59.652620  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:59.677515  291455 cri.go:89] found id: ""
	I1212 01:40:59.677538  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.677546  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:59.677551  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:59.677609  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:59.701508  291455 cri.go:89] found id: ""
	I1212 01:40:59.701531  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.701539  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:59.701545  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:59.701602  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:59.726132  291455 cri.go:89] found id: ""
	I1212 01:40:59.726154  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.726162  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:59.726168  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:59.726228  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:59.751581  291455 cri.go:89] found id: ""
	I1212 01:40:59.751608  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.751617  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:59.751625  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:59.751682  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:59.780780  291455 cri.go:89] found id: ""
	I1212 01:40:59.780805  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.780825  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:59.780836  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:59.780901  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:59.866401  291455 cri.go:89] found id: ""
	I1212 01:40:59.866424  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.866433  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:59.866442  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:59.866453  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:59.921825  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:59.921862  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:59.935338  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:59.935366  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:59.999474  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:59.992159   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.992558   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.993995   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.994293   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.995686   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:59.992159   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.992558   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.993995   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.994293   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.995686   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:59.999546  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:59.999574  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:00.079868  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:00.084769  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
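[Annotation] The timestamps show the wait loop's cadence: each cycle starts with a pgrep for a kube-apiserver process, and after the gather phase the loop re-probes roughly every three seconds (01:40:38, 01:40:41, 01:40:44, ... 01:41:02) until its overall timeout. A hypothetical shell equivalent of that wait — a sketch, not minikube's actual implementation:

    # Re-probe every ~3 s until an apiserver process appears (it never does here).
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done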
	I1212 01:41:02.719157  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:02.730262  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:02.730335  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:02.756172  291455 cri.go:89] found id: ""
	I1212 01:41:02.756196  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.756206  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:02.756213  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:02.756272  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:02.792420  291455 cri.go:89] found id: ""
	I1212 01:41:02.792445  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.792455  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:02.792461  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:02.792531  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:02.838813  291455 cri.go:89] found id: ""
	I1212 01:41:02.838841  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.838849  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:02.838856  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:02.838918  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:02.886478  291455 cri.go:89] found id: ""
	I1212 01:41:02.886504  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.886513  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:02.886523  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:02.886580  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:02.914286  291455 cri.go:89] found id: ""
	I1212 01:41:02.914309  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.914318  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:02.914333  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:02.914403  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:02.939527  291455 cri.go:89] found id: ""
	I1212 01:41:02.939550  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.939559  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:02.939565  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:02.939624  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:02.965321  291455 cri.go:89] found id: ""
	I1212 01:41:02.965345  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.965354  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:02.965360  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:02.965423  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:02.991292  291455 cri.go:89] found id: ""
	I1212 01:41:02.991316  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.991325  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:02.991341  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:02.991352  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:03.019527  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:03.019562  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:03.051852  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:03.051878  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:03.107633  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:03.107667  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:03.121349  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:03.121375  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:03.186261  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:03.177889   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.178763   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.180270   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.180822   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.182351   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:41:05.687947  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:05.698808  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:05.698883  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:05.724019  291455 cri.go:89] found id: ""
	I1212 01:41:05.724043  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.724052  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:05.724058  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:05.724115  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:05.752813  291455 cri.go:89] found id: ""
	I1212 01:41:05.752838  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.752847  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:05.752853  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:05.752917  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:05.777122  291455 cri.go:89] found id: ""
	I1212 01:41:05.777144  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.777152  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:05.777158  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:05.777215  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:05.833235  291455 cri.go:89] found id: ""
	I1212 01:41:05.833260  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.833270  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:05.833276  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:05.833350  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:05.880483  291455 cri.go:89] found id: ""
	I1212 01:41:05.880506  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.880514  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:05.880520  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:05.880583  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:05.904810  291455 cri.go:89] found id: ""
	I1212 01:41:05.904834  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.904843  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:05.904849  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:05.904906  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:05.936458  291455 cri.go:89] found id: ""
	I1212 01:41:05.936482  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.936491  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:05.936497  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:05.936585  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:05.965168  291455 cri.go:89] found id: ""
	I1212 01:41:05.965193  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.965202  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:05.965212  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:05.965225  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:06.022621  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:06.022674  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:06.036897  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:06.036926  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:06.105481  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:06.097089   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.097938   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.099584   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.099907   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.101467   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:41:06.105505  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:06.105518  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:06.131153  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:06.131186  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:08.659864  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:08.670811  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:08.670881  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:08.694882  291455 cri.go:89] found id: ""
	I1212 01:41:08.694903  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.694911  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:08.694917  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:08.694976  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:08.719560  291455 cri.go:89] found id: ""
	I1212 01:41:08.719590  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.719598  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:08.719605  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:08.719662  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:08.744076  291455 cri.go:89] found id: ""
	I1212 01:41:08.744103  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.744113  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:08.744119  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:08.744177  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:08.772960  291455 cri.go:89] found id: ""
	I1212 01:41:08.772985  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.772994  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:08.773001  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:08.773080  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:08.815633  291455 cri.go:89] found id: ""
	I1212 01:41:08.815659  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.815668  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:08.815674  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:08.815742  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:08.878320  291455 cri.go:89] found id: ""
	I1212 01:41:08.878345  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.878353  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:08.878360  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:08.878450  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:08.904601  291455 cri.go:89] found id: ""
	I1212 01:41:08.904628  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.904636  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:08.904643  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:08.904702  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:08.929638  291455 cri.go:89] found id: ""
	I1212 01:41:08.929660  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.929668  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:08.929678  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:08.929689  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:08.987700  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:08.987732  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:09.006748  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:09.006844  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:09.074571  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:09.066680   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.067299   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.068802   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.069203   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.070675   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:41:09.074595  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:09.074607  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:09.099568  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:09.099599  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:11.629539  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:11.640012  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:11.640082  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:11.663460  291455 cri.go:89] found id: ""
	I1212 01:41:11.663485  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.663493  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:11.663500  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:11.663555  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:11.686956  291455 cri.go:89] found id: ""
	I1212 01:41:11.686978  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.686986  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:11.687088  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:11.687150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:11.712890  291455 cri.go:89] found id: ""
	I1212 01:41:11.712913  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.712922  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:11.712928  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:11.712984  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:11.736706  291455 cri.go:89] found id: ""
	I1212 01:41:11.736728  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.736736  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:11.736742  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:11.736800  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:11.759893  291455 cri.go:89] found id: ""
	I1212 01:41:11.759915  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.759923  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:11.759929  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:11.759986  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:11.794524  291455 cri.go:89] found id: ""
	I1212 01:41:11.794548  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.794556  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:11.794563  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:11.794617  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:11.837664  291455 cri.go:89] found id: ""
	I1212 01:41:11.837685  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.837693  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:11.837699  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:11.837758  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:11.876539  291455 cri.go:89] found id: ""
	I1212 01:41:11.876560  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.876568  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:11.876576  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:11.876588  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:11.891935  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:11.891958  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:11.953883  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:11.945499   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.946165   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.947829   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.948378   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.949885   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:41:11.953906  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:11.953919  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:11.978361  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:11.978394  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:12.008436  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:12.008467  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:14.566794  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:14.577540  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:14.577620  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:14.603419  291455 cri.go:89] found id: ""
	I1212 01:41:14.603444  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.603453  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:14.603459  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:14.603523  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:14.627963  291455 cri.go:89] found id: ""
	I1212 01:41:14.627986  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.627994  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:14.628000  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:14.628064  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:14.651989  291455 cri.go:89] found id: ""
	I1212 01:41:14.652014  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.652024  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:14.652031  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:14.652089  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:14.680771  291455 cri.go:89] found id: ""
	I1212 01:41:14.680794  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.680802  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:14.680808  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:14.680865  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:14.705454  291455 cri.go:89] found id: ""
	I1212 01:41:14.705479  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.705488  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:14.705494  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:14.705553  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:14.734181  291455 cri.go:89] found id: ""
	I1212 01:41:14.734207  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.734216  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:14.734222  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:14.734279  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:14.758125  291455 cri.go:89] found id: ""
	I1212 01:41:14.758150  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.758159  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:14.758165  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:14.758224  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:14.796212  291455 cri.go:89] found id: ""
	I1212 01:41:14.796239  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.796248  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:14.796257  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:14.796268  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:14.875942  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:14.875982  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:14.893694  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:14.893723  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:14.958664  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:14.950439   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.951146   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.952867   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.953336   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.954860   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:41:14.958686  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:14.958698  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:14.983555  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:14.983592  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:17.522313  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:17.532817  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:17.532892  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:17.560757  291455 cri.go:89] found id: ""
	I1212 01:41:17.560779  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.560788  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:17.560795  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:17.560851  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:17.585702  291455 cri.go:89] found id: ""
	I1212 01:41:17.585725  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.585734  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:17.585740  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:17.585807  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:17.614888  291455 cri.go:89] found id: ""
	I1212 01:41:17.614912  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.614920  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:17.614926  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:17.614983  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:17.640684  291455 cri.go:89] found id: ""
	I1212 01:41:17.640706  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.640714  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:17.640721  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:17.640781  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:17.666504  291455 cri.go:89] found id: ""
	I1212 01:41:17.666529  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.666538  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:17.666545  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:17.666619  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:17.693636  291455 cri.go:89] found id: ""
	I1212 01:41:17.693661  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.693670  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:17.693677  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:17.693738  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:17.718203  291455 cri.go:89] found id: ""
	I1212 01:41:17.718270  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.718310  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:17.718337  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:17.718430  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:17.745520  291455 cri.go:89] found id: ""
	I1212 01:41:17.745544  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.745553  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:17.745562  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:17.745574  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:17.809137  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:17.809237  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:17.824842  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:17.824909  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:17.914329  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:17.905491   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.906027   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.907410   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.907912   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.909473   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:41:17.914350  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:17.914365  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:17.939510  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:17.939546  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:20.466980  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:20.480747  291455 out.go:203] 
	W1212 01:41:20.483558  291455 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1212 01:41:20.483596  291455 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1212 01:41:20.483610  291455 out.go:285] * Related issues:
	W1212 01:41:20.483628  291455 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1212 01:41:20.483644  291455 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1212 01:41:20.486471  291455 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245221319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245292023Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245392938Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245465406Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245525534Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245588713Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245646150Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245704997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245771016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245854791Z" level=info msg="Connect containerd service"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.246200073Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.246847141Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.263118416Z" level=info msg="Start subscribing containerd event"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.263340210Z" level=info msg="Start recovering state"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.263271673Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.264204469Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.302901466Z" level=info msg="Start event monitor"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.302971940Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.302983534Z" level=info msg="Start streaming server"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.303030213Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.303039617Z" level=info msg="runtime interface starting up..."
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.303045705Z" level=info msg="starting plugins..."
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.303228803Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.303400291Z" level=info msg="containerd successfully booted in 0.083333s"
	Dec 12 01:35:17 newest-cni-256959 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:29.900787   13800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:29.901447   13800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:29.902925   13800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:29.903680   13800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:29.905162   13800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	[Dec12 00:40] hrtimer: interrupt took 11339963 ns
	
	
	==> kernel <==
	 01:41:29 up  2:23,  0 user,  load average: 0.57, 0.65, 1.26
	Linux newest-cni-256959 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 01:41:26 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:41:26 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:41:26 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:27 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:27 newest-cni-256959 kubelet[13647]: E1212 01:41:27.470786   13647 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:41:27 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:41:27 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:41:28 newest-cni-256959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 12 01:41:28 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:28 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:28 newest-cni-256959 kubelet[13686]: E1212 01:41:28.346838   13686 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:41:28 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:41:28 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:41:29 newest-cni-256959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 12 01:41:29 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:29 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:29 newest-cni-256959 kubelet[13704]: E1212 01:41:29.158403   13704 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:41:29 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:41:29 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:41:29 newest-cni-256959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 12 01:41:29 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:29 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:29 newest-cni-256959 kubelet[13793]: E1212 01:41:29.881729   13793 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:41:29 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:41:29 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
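The journal excerpt above is a kubelet crash loop: kubelet v1.35.0-beta.0 fails configuration validation on a cgroup v1 host, systemd restarts the unit, and the restart counter climbs with the same "kubelet is configured to not run on a host using cgroup v1" error each time. A quick host-side check of which cgroup hierarchy is mounted (a diagnostic sketch for triage, not part of the test harness):

	# prints "cgroup2fs" on a cgroup v2 (unified) host, "tmpfs" on cgroup v1
	stat -fc %T /sys/fs/cgroup/

On this runner (Ubuntu 20.04, kernel 5.15.0-1084-aws, per the hostinfo line later in the log) the command would be expected to print "tmpfs", consistent with the validation failure.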
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-256959 -n newest-cni-256959
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-256959 -n newest-cni-256959: exit status 2 (325.74781ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-256959" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-256959
helpers_test.go:244: (dbg) docker inspect newest-cni-256959:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b",
	        "Created": "2025-12-12T01:25:15.433462291Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 291584,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T01:35:11.599618298Z",
	            "FinishedAt": "2025-12-12T01:35:10.241180563Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/hostname",
	        "HostsPath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/hosts",
	        "LogPath": "/var/lib/docker/containers/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b/361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b-json.log",
	        "Name": "/newest-cni-256959",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-256959:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-256959",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "361f9c16c44aa15a36ece6d69f387f89ceb140e0cb337e6926bbb4b89286930b",
	                "LowerDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017/merged",
	                "UpperDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017/diff",
	                "WorkDir": "/var/lib/docker/overlay2/46aaa938e663ba5fb2a5ebb73f99ff9b7f70d7fe540cd86b216b975394456017/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-256959",
	                "Source": "/var/lib/docker/volumes/newest-cni-256959/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-256959",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-256959",
	                "name.minikube.sigs.k8s.io": "newest-cni-256959",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "345adc76212ae94224c61dd049e472f16ee67ee027a331e11cdf648a15dff74a",
	            "SandboxKey": "/var/run/docker/netns/345adc76212a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-256959": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:19:c4:dc:e5:59",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "08d9e23f02a4d7730d420d79f658bc1854aa3d62ee2a54a8cd34a455b2ba0431",
	                    "EndpointID": "e780ab70cd5a9e96f54f2a272324b26b9e51bece9b706db46ac5aff93fb5ac56",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-256959",
	                        "361f9c16c44a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
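Per the inspect output, the container itself is "running", with sshd published on host port 33103 and the apiserver port 8443 mapped to 33106; only the workload inside it is down. The harness reads these mappings with a Go-template query; the same template appears verbatim in the start log below, e.g.:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-256959
	# expected output for this container: 33103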
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-256959 -n newest-cni-256959
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-256959 -n newest-cni-256959: exit status 2 (345.731423ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
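With the host container "Running" but the apiserver "Stopped", the failing step can be replayed by hand against this profile (a manual-repro sketch; the binary path, profile name, and flags are the ones recorded in the audit table below):

	out/minikube-linux-arm64 pause -p newest-cni-256959 --alsologtostderr -v=1
	out/minikube-linux-arm64 unpause -p newest-cni-256959 --alsologtostderr -v=1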
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-256959 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-256959 logs -n 25: (1.616608032s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p default-k8s-diff-port-971096                                                                                                                                                                                                                            │ default-k8s-diff-port-971096 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ delete  │ -p disable-driver-mounts-539158                                                                                                                                                                                                                            │ disable-driver-mounts-539158 │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │ 12 Dec 25 01:22 UTC │
	│ start   │ -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:22 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-648696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ stop    │ -p embed-certs-648696 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ addons  │ enable dashboard -p embed-certs-648696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:23 UTC │
	│ start   │ -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:23 UTC │ 12 Dec 25 01:24 UTC │
	│ image   │ embed-certs-648696 image list --format=json                                                                                                                                                                                                                │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ pause   │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ unpause │ -p embed-certs-648696 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ delete  │ -p embed-certs-648696                                                                                                                                                                                                                                      │ embed-certs-648696           │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │ 12 Dec 25 01:25 UTC │
	│ start   │ -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:25 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-361053 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:31 UTC │                     │
	│ stop    │ -p no-preload-361053 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │ 12 Dec 25 01:33 UTC │
	│ addons  │ enable dashboard -p no-preload-361053 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │ 12 Dec 25 01:33 UTC │
	│ start   │ -p no-preload-361053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-361053            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-256959 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:33 UTC │                     │
	│ stop    │ -p newest-cni-256959 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │ 12 Dec 25 01:35 UTC │
	│ addons  │ enable dashboard -p newest-cni-256959 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │ 12 Dec 25 01:35 UTC │
	│ start   │ -p newest-cni-256959 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:35 UTC │                     │
	│ image   │ newest-cni-256959 image list --format=json                                                                                                                                                                                                                 │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:41 UTC │ 12 Dec 25 01:41 UTC │
	│ pause   │ -p newest-cni-256959 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:41 UTC │ 12 Dec 25 01:41 UTC │
	│ unpause │ -p newest-cni-256959 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-256959            │ jenkins │ v1.37.0 │ 12 Dec 25 01:41 UTC │ 12 Dec 25 01:41 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 01:35:11
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:35:11.336080  291455 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:35:11.336277  291455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:35:11.336290  291455 out.go:374] Setting ErrFile to fd 2...
	I1212 01:35:11.336296  291455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:35:11.336566  291455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:35:11.336950  291455 out.go:368] Setting JSON to false
	I1212 01:35:11.337843  291455 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8258,"bootTime":1765495054,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 01:35:11.337913  291455 start.go:143] virtualization:  
	I1212 01:35:11.341103  291455 out.go:179] * [newest-cni-256959] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:35:11.345273  291455 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:35:11.345376  291455 notify.go:221] Checking for updates...
	I1212 01:35:11.351231  291455 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:35:11.354134  291455 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:35:11.357086  291455 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 01:35:11.359981  291455 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:35:11.363090  291455 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:35:11.366381  291455 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:35:11.367076  291455 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:35:11.397719  291455 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:35:11.397845  291455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:35:11.450218  291455 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:35:11.441400779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:35:11.450324  291455 docker.go:319] overlay module found
	I1212 01:35:11.453495  291455 out.go:179] * Using the docker driver based on existing profile
	I1212 01:35:11.456257  291455 start.go:309] selected driver: docker
	I1212 01:35:11.456272  291455 start.go:927] validating driver "docker" against &{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:11.456385  291455 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:35:11.457105  291455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:35:11.512167  291455 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:35:11.503270098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:35:11.512501  291455 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 01:35:11.512533  291455 cni.go:84] Creating CNI manager for ""
	I1212 01:35:11.512581  291455 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:35:11.512620  291455 start.go:353] cluster config:
	{Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:11.517595  291455 out.go:179] * Starting "newest-cni-256959" primary control-plane node in "newest-cni-256959" cluster
	I1212 01:35:11.520355  291455 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 01:35:11.523510  291455 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:35:11.526310  291455 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:35:11.526350  291455 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1212 01:35:11.526380  291455 cache.go:65] Caching tarball of preloaded images
	I1212 01:35:11.526401  291455 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:35:11.526463  291455 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 01:35:11.526474  291455 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1212 01:35:11.526577  291455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:35:11.545949  291455 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:35:11.545972  291455 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:35:11.545990  291455 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:35:11.546021  291455 start.go:360] acquireMachinesLock for newest-cni-256959: {Name:mke4c35c218ad59b1da2c46074b57e71134fc7be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:35:11.546106  291455 start.go:364] duration metric: took 61.449µs to acquireMachinesLock for "newest-cni-256959"
	I1212 01:35:11.546128  291455 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:35:11.546140  291455 fix.go:54] fixHost starting: 
	I1212 01:35:11.546394  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:11.562986  291455 fix.go:112] recreateIfNeeded on newest-cni-256959: state=Stopped err=<nil>
	W1212 01:35:11.563044  291455 fix.go:138] unexpected machine state, will restart: <nil>
	W1212 01:35:12.535792  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:12.641222  287206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:12.704850  287206 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:35:12.704951  287206 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 01:35:12.708213  287206 out.go:179] * Enabled addons: 
	I1212 01:35:12.711265  287206 addons.go:530] duration metric: took 1m55.054971797s for enable addons: enabled=[]
	W1212 01:35:14.536558  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:11.566225  291455 out.go:252] * Restarting existing docker container for "newest-cni-256959" ...
	I1212 01:35:11.566307  291455 cli_runner.go:164] Run: docker start newest-cni-256959
	I1212 01:35:11.824711  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:11.850549  291455 kic.go:430] container "newest-cni-256959" state is running.
	I1212 01:35:11.850948  291455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:35:11.874496  291455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/config.json ...
	I1212 01:35:11.875491  291455 machine.go:94] provisionDockerMachine start ...
	I1212 01:35:11.875566  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:11.904543  291455 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:11.904867  291455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 01:35:11.904894  291455 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:35:11.905649  291455 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 01:35:15.062841  291455 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:35:15.062884  291455 ubuntu.go:182] provisioning hostname "newest-cni-256959"
	I1212 01:35:15.062966  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.081374  291455 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:15.081715  291455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 01:35:15.081732  291455 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-256959 && echo "newest-cni-256959" | sudo tee /etc/hostname
	I1212 01:35:15.244594  291455 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256959
	
	I1212 01:35:15.244717  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.262885  291455 main.go:143] libmachine: Using SSH client type: native
	I1212 01:35:15.263226  291455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1212 01:35:15.263249  291455 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-256959' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-256959/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-256959' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:35:15.415381  291455 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:35:15.415407  291455 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 01:35:15.415450  291455 ubuntu.go:190] setting up certificates
	I1212 01:35:15.415469  291455 provision.go:84] configureAuth start
	I1212 01:35:15.415542  291455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:35:15.432184  291455 provision.go:143] copyHostCerts
	I1212 01:35:15.432260  291455 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 01:35:15.432274  291455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 01:35:15.432771  291455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 01:35:15.432891  291455 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 01:35:15.432905  291455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 01:35:15.432935  291455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 01:35:15.433008  291455 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 01:35:15.433018  291455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 01:35:15.433044  291455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 01:35:15.433100  291455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.newest-cni-256959 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-256959]
	I1212 01:35:15.664957  291455 provision.go:177] copyRemoteCerts
	I1212 01:35:15.665025  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:35:15.665084  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.682010  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:15.786690  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:35:15.804464  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:35:15.821597  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 01:35:15.838753  291455 provision.go:87] duration metric: took 423.263374ms to configureAuth
	I1212 01:35:15.838782  291455 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:35:15.839040  291455 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:35:15.839053  291455 machine.go:97] duration metric: took 3.963544394s to provisionDockerMachine
	I1212 01:35:15.839061  291455 start.go:293] postStartSetup for "newest-cni-256959" (driver="docker")
	I1212 01:35:15.839072  291455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:35:15.839119  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:35:15.839169  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:15.855712  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:15.959303  291455 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:35:15.962341  291455 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:35:15.962368  291455 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:35:15.962380  291455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 01:35:15.962429  291455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 01:35:15.962509  291455 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 01:35:15.962609  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:35:15.969472  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:35:15.986194  291455 start.go:296] duration metric: took 147.119175ms for postStartSetup
	I1212 01:35:15.986304  291455 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:35:15.986375  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:16.005019  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:16.107859  291455 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:35:16.112663  291455 fix.go:56] duration metric: took 4.566516262s for fixHost
	I1212 01:35:16.112691  291455 start.go:83] releasing machines lock for "newest-cni-256959", held for 4.566573288s
	I1212 01:35:16.112760  291455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256959
	I1212 01:35:16.129477  291455 ssh_runner.go:195] Run: cat /version.json
	I1212 01:35:16.129531  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:16.129775  291455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:35:16.129824  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:16.153158  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:16.155921  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:16.367474  291455 ssh_runner.go:195] Run: systemctl --version
	I1212 01:35:16.373832  291455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:35:16.378022  291455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:35:16.378104  291455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:35:16.385747  291455 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 01:35:16.385772  291455 start.go:496] detecting cgroup driver to use...
	I1212 01:35:16.385819  291455 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:35:16.385882  291455 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 01:35:16.403657  291455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 01:35:16.417469  291455 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:35:16.417564  291455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:35:16.433612  291455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:35:16.446861  291455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:35:16.554018  291455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:35:16.672193  291455 docker.go:234] disabling docker service ...
	I1212 01:35:16.672283  291455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:35:16.687238  291455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:35:16.700659  291455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:35:16.812563  291455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:35:16.928270  291455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:35:16.941185  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:35:16.957067  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 01:35:16.966276  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 01:35:16.975221  291455 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 01:35:16.975292  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 01:35:16.984294  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:35:16.993328  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 01:35:17.004796  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:35:17.015289  291455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:35:17.023922  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 01:35:17.036658  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 01:35:17.046732  291455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
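The run of sed edits above rewrites /etc/containerd/config.toml in place: pause image, cgroup driver, runc v2 runtime, CNI conf dir, and unprivileged ports. A quick spot-check of the outcome (values taken from this log; the exact TOML nesting varies by containerd release):

    grep -nE 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml
    # expected, approximately:
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true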
	I1212 01:35:17.056354  291455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:35:17.064063  291455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
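Both kernel prerequisites for pod networking are handled here: bridged traffic must traverse iptables, and IPv4 forwarding must be on. Directly:

    sysctl net.bridge.bridge-nf-call-iptables      # needs the br_netfilter module loaded
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward >/dev/null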
	I1212 01:35:17.071833  291455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:17.188012  291455 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 01:35:17.306110  291455 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 01:35:17.306231  291455 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 01:35:17.309882  291455 start.go:564] Will wait 60s for crictl version
	I1212 01:35:17.309968  291455 ssh_runner.go:195] Run: which crictl
	I1212 01:35:17.313475  291455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:35:17.340045  291455 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 01:35:17.340140  291455 ssh_runner.go:195] Run: containerd --version
	I1212 01:35:17.360301  291455 ssh_runner.go:195] Run: containerd --version
	I1212 01:35:17.385714  291455 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1212 01:35:17.388490  291455 cli_runner.go:164] Run: docker network inspect newest-cni-256959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:35:17.404979  291455 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 01:35:17.409350  291455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
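This is minikube's idempotent /etc/hosts update: strip any stale entry for the name, append the current mapping, then copy the temp file back (cp rather than mv, since /etc/hosts is bind-mounted inside the container and a rename across it would fail). Standalone, with the tab written explicitly:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.76.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$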
	I1212 01:35:17.422610  291455 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 01:35:17.425426  291455 kubeadm.go:884] updating cluster {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:35:17.425578  291455 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1212 01:35:17.425675  291455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:35:17.450191  291455 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:35:17.450217  291455 containerd.go:534] Images already preloaded, skipping extraction
	I1212 01:35:17.450277  291455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:35:17.474185  291455 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:35:17.474220  291455 cache_images.go:86] Images are preloaded, skipping loading
	I1212 01:35:17.474228  291455 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1212 01:35:17.474373  291455 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-256959 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
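Note the two ExecStart lines in the drop-in above: in a systemd drop-in, an empty ExecStart= first clears the command inherited from the base kubelet.service, and the following ExecStart= replaces it wholesale. Once the scp below lands this as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, the merged unit can be inspected with:

    systemctl cat kubelet            # base unit plus all drop-ins, in order
    sudo systemctl daemon-reload     # required after changing unit files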
	I1212 01:35:17.474472  291455 ssh_runner.go:195] Run: sudo crictl info
	I1212 01:35:17.498662  291455 cni.go:84] Creating CNI manager for ""
	I1212 01:35:17.498685  291455 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1212 01:35:17.498869  291455 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1212 01:35:17.498905  291455 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-256959 NodeName:newest-cni-256959 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stat
icPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:35:17.499182  291455 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-256959"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
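The generated file stacks four documents separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration; it is written to /var/tmp/minikube/kubeadm.yaml.new below. Assuming this kubeadm build keeps the config validate subcommand that stock kubeadm has shipped since v1.26, the file can be sanity-checked with:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new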
	
	I1212 01:35:17.499276  291455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 01:35:17.511920  291455 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:35:17.512017  291455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:35:17.519602  291455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:35:17.532107  291455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 01:35:17.545262  291455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1212 01:35:17.557618  291455 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:35:17.561053  291455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:35:17.570894  291455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:17.675958  291455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:35:17.692695  291455 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959 for IP: 192.168.76.2
	I1212 01:35:17.692715  291455 certs.go:195] generating shared ca certs ...
	I1212 01:35:17.692750  291455 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:17.692911  291455 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 01:35:17.692980  291455 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 01:35:17.692995  291455 certs.go:257] generating profile certs ...
	I1212 01:35:17.693112  291455 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/client.key
	I1212 01:35:17.693202  291455 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key.b05ecb93
	I1212 01:35:17.693309  291455 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key
	I1212 01:35:17.693447  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 01:35:17.693518  291455 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 01:35:17.693536  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:35:17.693582  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:35:17.693632  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:35:17.693666  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 01:35:17.693747  291455 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:35:17.694397  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:35:17.712974  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:35:17.738035  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:35:17.758905  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:35:17.776423  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:35:17.805243  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:35:17.826665  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:35:17.847012  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/newest-cni-256959/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:35:17.868946  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:35:17.887272  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 01:35:17.904023  291455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 01:35:17.920802  291455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:35:17.933645  291455 ssh_runner.go:195] Run: openssl version
	I1212 01:35:17.939797  291455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.946909  291455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:35:17.954537  291455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.958217  291455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.958301  291455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:35:17.998878  291455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:35:18.008093  291455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.016725  291455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 01:35:18.025237  291455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.029387  291455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.029458  291455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 01:35:18.072423  291455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:35:18.080329  291455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.088043  291455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 01:35:18.095703  291455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.100065  291455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.100135  291455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 01:35:18.141016  291455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
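The three hash-and-symlink rounds above implement OpenSSL's CA directory convention: TLS libraries look a CA up as /etc/ssl/certs/<subject-hash>.0, which is exactly what c_rehash automates. One round, spelled out:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")    # b5213941 for this CA, per the log
    sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"
    sudo test -L "/etc/ssl/certs/${h}.0" && echo linked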
	I1212 01:35:18.148423  291455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:35:18.152541  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:35:18.195372  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:35:18.236073  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:35:18.276924  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:35:18.317697  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:35:18.358213  291455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
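Each check above uses -checkend 86400, which makes openssl exit non-zero if the certificate expires within the next 24 hours; a zero exit means the cert can be reused and regeneration skipped. The same checks as a loop:

    for c in apiserver-etcd-client apiserver-kubelet-client \
             etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
      openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
        || echo "${c}.crt expires within 24h"
    done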
	I1212 01:35:18.400083  291455 kubeadm.go:401] StartCluster: {Name:newest-cni-256959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:35:18.400177  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 01:35:18.400236  291455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:35:18.437669  291455 cri.go:89] found id: ""
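The empty result above (found id: "") simply means nothing from kube-system is running in this runc root yet; the restart-vs-rebuild decision on the next lines is driven by the existing config files. The same CRI query, runnable directly:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system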
	I1212 01:35:18.437744  291455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:35:18.446134  291455 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 01:35:18.446156  291455 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 01:35:18.446208  291455 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:35:18.453928  291455 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:35:18.454522  291455 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-256959" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:35:18.454766  291455 kubeconfig.go:62] /home/jenkins/minikube-integration/22101-2343/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-256959" cluster setting kubeconfig missing "newest-cni-256959" context setting]
	I1212 01:35:18.455226  291455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:18.456674  291455 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:35:18.464597  291455 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1212 01:35:18.464630  291455 kubeadm.go:602] duration metric: took 18.46826ms to restartPrimaryControlPlane
	I1212 01:35:18.464640  291455 kubeadm.go:403] duration metric: took 64.568702ms to StartCluster
	I1212 01:35:18.464656  291455 settings.go:142] acquiring lock: {Name:mk6dd4250df69aeba4752e9f33aeef37272375c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:18.464716  291455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:35:18.465619  291455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:35:18.465827  291455 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 01:35:18.466211  291455 config.go:182] Loaded profile config "newest-cni-256959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:35:18.466236  291455 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:35:18.466355  291455 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-256959"
	I1212 01:35:18.466367  291455 addons.go:70] Setting dashboard=true in profile "newest-cni-256959"
	I1212 01:35:18.466371  291455 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-256959"
	I1212 01:35:18.466378  291455 addons.go:239] Setting addon dashboard=true in "newest-cni-256959"
	W1212 01:35:18.466385  291455 addons.go:248] addon dashboard should already be in state true
	I1212 01:35:18.466396  291455 host.go:66] Checking if "newest-cni-256959" exists ...
	I1212 01:35:18.466403  291455 host.go:66] Checking if "newest-cni-256959" exists ...
	I1212 01:35:18.466836  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:18.466869  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:18.467337  291455 addons.go:70] Setting default-storageclass=true in profile "newest-cni-256959"
	I1212 01:35:18.467363  291455 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-256959"
	I1212 01:35:18.467641  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:18.469758  291455 out.go:179] * Verifying Kubernetes components...
	I1212 01:35:18.473053  291455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:35:18.505578  291455 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:35:18.507992  291455 addons.go:239] Setting addon default-storageclass=true in "newest-cni-256959"
	I1212 01:35:18.508032  291455 host.go:66] Checking if "newest-cni-256959" exists ...
	I1212 01:35:18.508443  291455 cli_runner.go:164] Run: docker container inspect newest-cni-256959 --format={{.State.Status}}
	I1212 01:35:18.515343  291455 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:35:18.515364  291455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:35:18.515428  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:18.518345  291455 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1212 01:35:18.523100  291455 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1212 01:35:17.036393  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:19.036650  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:18.525972  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1212 01:35:18.526002  291455 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1212 01:35:18.526079  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:18.564602  291455 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:18.564630  291455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:35:18.564700  291455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256959
	I1212 01:35:18.565404  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:18.592490  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
	I1212 01:35:18.614974  291455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/newest-cni-256959/id_rsa Username:docker}
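All three SSH clients above target 127.0.0.1:33103, the host port Docker published for the container's 22/tcp; the inspect template in the preceding commands is how that port is resolved. A cleaner standalone form of the same query:

    docker container inspect \
      -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' \
      newest-cni-256959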
	I1212 01:35:18.707284  291455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:35:18.738514  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:35:18.783779  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1212 01:35:18.783804  291455 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1212 01:35:18.797813  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:18.817201  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1212 01:35:18.817275  291455 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1212 01:35:18.834247  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1212 01:35:18.834268  291455 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1212 01:35:18.850261  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1212 01:35:18.850281  291455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1212 01:35:18.864878  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1212 01:35:18.864902  291455 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1212 01:35:18.879989  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1212 01:35:18.880012  291455 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1212 01:35:18.893252  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1212 01:35:18.893275  291455 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1212 01:35:18.906457  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1212 01:35:18.906522  291455 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1212 01:35:18.919410  291455 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:18.919484  291455 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1212 01:35:18.931957  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:19.295481  291455 api_server.go:52] waiting for apiserver process to appear ...
	W1212 01:35:19.295638  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.295690  291455 retry.go:31] will retry after 249.842732ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:35:19.295768  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.295783  291455 retry.go:31] will retry after 351.420897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:35:19.296118  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.296142  291455 retry.go:31] will retry after 281.426587ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.296213  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:19.546048  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:35:19.578494  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:19.622946  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.623064  291455 retry.go:31] will retry after 277.166543ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.648375  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:19.656309  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.656406  291455 retry.go:31] will retry after 462.607475ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:35:19.715463  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.715506  291455 retry.go:31] will retry after 556.232924ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.796674  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:19.900383  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:19.963236  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:19.963266  291455 retry.go:31] will retry after 505.253944ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.119589  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:20.186519  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.186613  291455 retry.go:31] will retry after 424.835438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.272893  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:20.296648  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:20.336051  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.336183  291455 retry.go:31] will retry after 483.909657ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.469348  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:20.528062  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.528096  291455 retry.go:31] will retry after 804.643976ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.612336  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:20.682501  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.682548  291455 retry.go:31] will retry after 558.97301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.795783  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:20.820454  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:20.905698  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:20.905732  291455 retry.go:31] will retry after 695.755311ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.242222  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:21.295663  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:21.312788  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.312824  291455 retry.go:31] will retry after 1.866088371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.333223  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:21.395495  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.395527  291455 retry.go:31] will retry after 1.442265452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.601699  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:21.661918  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.661958  291455 retry.go:31] will retry after 965.923553ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:21.796193  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:22.296596  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:22.628164  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:22.689983  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:22.690024  291455 retry.go:31] will retry after 2.419076287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:22.796215  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:22.838490  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:22.896567  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:22.896595  291455 retry.go:31] will retry after 1.026441386s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:23.180088  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:23.242606  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:23.242641  291455 retry.go:31] will retry after 1.447175367s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:23.295985  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:23.536603  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:23.795677  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:23.924269  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:23.999262  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:23.999301  291455 retry.go:31] will retry after 3.676300513s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:24.295744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:24.690891  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:24.751142  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:24.751178  291455 retry.go:31] will retry after 2.523379824s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:24.796474  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:25.109290  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:25.170081  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:25.170117  291455 retry.go:31] will retry after 1.61445699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:25.296317  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:25.796411  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:26.036848  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:26.295885  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:28.536033  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:26.784844  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:35:26.796101  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:26.910864  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:26.910893  291455 retry.go:31] will retry after 5.25056634s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.275356  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:27.295815  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:27.348749  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.348785  291455 retry.go:31] will retry after 4.97523733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.676221  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:27.738144  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.738177  291455 retry.go:31] will retry after 5.096436926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:27.796329  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:28.296194  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:28.795721  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:29.296646  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:29.795689  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:30.295694  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:30.796607  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:31.296202  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
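	Interleaved with the retries, a second loop polls roughly every 500ms for a running kube-apiserver process, which produces the long runs of identical pgrep entries above and below. A rough local stand-in for that loop (waitForProcess is a hypothetical name; minikube issues the same pgrep through its ssh_runner into the node rather than locally):

```go
// Sketch (assumed): poll pgrep on a fixed ~500ms cadence until the
// pattern matches or the context expires, mirroring the timestamps of
// the repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" log lines.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess returns nil once pgrep finds a match (pgrep exits 0
// when at least one process matches the full-command-line pattern).
func waitForProcess(ctx context.Context, pattern string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(waitForProcess(ctx, "kube-apiserver.*minikube.*"))
}
```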
	W1212 01:35:30.536109  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:32.536508  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:35.036562  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:31.795914  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:32.161653  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:32.223763  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.223796  291455 retry.go:31] will retry after 3.268815276s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.296204  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:32.325119  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:32.386121  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.386153  291455 retry.go:31] will retry after 5.854435808s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.796226  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:32.834968  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:32.909984  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:32.910017  291455 retry.go:31] will retry after 7.163447884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:33.296541  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:33.796667  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:34.295628  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:34.796652  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:35.295756  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:35.493366  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:35.556021  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:35.556054  291455 retry.go:31] will retry after 12.955659755s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:35.796356  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:36.296236  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:37.036788  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:39.536591  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:36.796391  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:37.295746  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:37.795722  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:38.241525  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1212 01:35:38.295983  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:38.315189  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:38.315224  291455 retry.go:31] will retry after 8.402358708s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:38.795800  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:39.296313  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:39.795769  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:40.074570  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:40.142371  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:40.142407  291455 retry.go:31] will retry after 11.797804339s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:40.295684  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:40.795715  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:41.295800  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:42.035934  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:44.036480  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:41.796201  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:42.295677  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:42.795870  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:43.296206  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:43.795818  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:44.295727  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:44.795706  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:45.296501  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:45.795731  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:46.296084  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:46.536110  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:48.536515  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:46.717860  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:46.778291  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:46.778324  291455 retry.go:31] will retry after 11.640937008s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:46.796419  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:47.296365  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:47.796242  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:48.295728  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:48.512617  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:35:48.620306  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:48.620334  291455 retry.go:31] will retry after 20.936993287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:48.795684  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:49.296228  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:49.796588  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:50.296351  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:50.796261  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:51.296609  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:50.536753  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:53.036546  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:51.796731  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:51.941351  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:35:52.001637  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:52.001682  291455 retry.go:31] will retry after 15.364088557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:52.296092  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:52.795636  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:53.296512  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:53.811922  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:54.295780  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:54.795777  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:55.296163  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:55.796273  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:56.295752  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:35:55.535981  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:57.536499  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:35:59.536582  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:35:56.795693  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:57.295887  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:57.796459  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:58.296209  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:58.419661  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:35:58.488403  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:58.488438  291455 retry.go:31] will retry after 29.791340434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:35:58.796698  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:59.295744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:35:59.796477  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:00.295794  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:00.795759  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:01.296237  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:36:02.036574  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:04.036717  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:01.796304  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:02.296424  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:02.795750  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:03.296298  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:03.796668  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:04.296158  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:04.796345  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:05.296665  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:05.796526  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:06.295717  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:36:06.536543  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:09.036693  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:06.795806  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:07.296383  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:07.366524  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:36:07.433303  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:07.433335  291455 retry.go:31] will retry after 21.959421138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:07.795756  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:08.296562  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:08.795685  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:09.295744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:09.558068  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:36:09.643748  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:09.643785  291455 retry.go:31] will retry after 31.140330108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:09.796018  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:10.295683  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:10.795744  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:11.295780  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:36:11.536613  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:13.536774  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:11.795645  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:12.295717  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:12.795762  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:13.296234  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:13.795775  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:14.296543  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:14.796297  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:15.295763  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:15.795884  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:16.296551  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 01:36:16.036849  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:18.536512  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:16.796640  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:17.295760  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:17.796208  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:18.296641  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:18.795858  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:18.795946  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:18.819559  291455 cri.go:89] found id: ""
	I1212 01:36:18.819585  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.819594  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:18.819605  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:18.819671  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:18.843419  291455 cri.go:89] found id: ""
	I1212 01:36:18.843444  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.843453  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:18.843459  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:18.843524  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:18.867870  291455 cri.go:89] found id: ""
	I1212 01:36:18.867894  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.867903  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:18.867910  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:18.867975  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:18.892504  291455 cri.go:89] found id: ""
	I1212 01:36:18.892528  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.892536  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:18.892543  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:18.892614  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:18.916462  291455 cri.go:89] found id: ""
	I1212 01:36:18.916484  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.916493  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:18.916499  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:18.916555  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:18.940793  291455 cri.go:89] found id: ""
	I1212 01:36:18.940818  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.940827  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:18.940833  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:18.940892  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:18.965485  291455 cri.go:89] found id: ""
	I1212 01:36:18.965513  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.965521  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:18.965527  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:18.965585  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:18.990141  291455 cri.go:89] found id: ""
	I1212 01:36:18.990170  291455 logs.go:282] 0 containers: []
	W1212 01:36:18.990179  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:18.990189  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:18.990202  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:19.044826  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:19.044860  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:19.058338  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:19.058373  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:19.121541  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:19.113010    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.113711    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.115490    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.116077    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.117640    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:19.113010    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.113711    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.115490    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.116077    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:19.117640    1851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:19.121602  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:19.121622  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:19.146904  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:19.146941  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
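	[editor's note] The cycle that just completed repeats for the rest of this excerpt: when pgrep finds no kube-apiserver process, minikube probes each expected control-plane container with "crictl ps -a --quiet --name=<component>", and since every probe returns an empty ID list it falls back to dumping the kubelet, dmesg, and containerd journals plus "kubectl describe nodes" (which also fails, since it needs the apiserver). A rough sketch of that probe loop, assuming only the crictl invocation visible in the log (the findContainers helper and component list ordering are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// findContainers asks crictl for container IDs matching a name filter,
	// mirroring the "sudo crictl ps -a --quiet --name=..." runs above.
	func findContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
		for _, c := range components {
			ids, err := findContainers(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
				continue
			}
			fmt.Println(c, ids)
		}
	}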
	W1212 01:36:21.036609  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:23.536552  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:21.678937  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:21.689641  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:21.689710  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:21.722833  291455 cri.go:89] found id: ""
	I1212 01:36:21.722854  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.722862  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:21.722869  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:21.722926  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:21.747286  291455 cri.go:89] found id: ""
	I1212 01:36:21.747323  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.747339  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:21.747346  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:21.747417  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:21.771941  291455 cri.go:89] found id: ""
	I1212 01:36:21.771965  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.771980  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:21.771987  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:21.772052  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:21.801075  291455 cri.go:89] found id: ""
	I1212 01:36:21.801104  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.801113  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:21.801119  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:21.801176  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:21.825561  291455 cri.go:89] found id: ""
	I1212 01:36:21.825587  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.825595  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:21.825601  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:21.825659  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:21.854532  291455 cri.go:89] found id: ""
	I1212 01:36:21.854559  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.854569  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:21.854580  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:21.854640  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:21.879725  291455 cri.go:89] found id: ""
	I1212 01:36:21.879789  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.879814  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:21.879828  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:21.879912  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:21.904405  291455 cri.go:89] found id: ""
	I1212 01:36:21.904428  291455 logs.go:282] 0 containers: []
	W1212 01:36:21.904437  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:21.904446  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:21.904487  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:21.970611  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:21.962223    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.962657    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964375    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964860    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.966282    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:21.962223    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.962657    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964375    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.964860    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:21.966282    1956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:21.970642  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:21.970659  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:21.995425  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:21.995463  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:22.024736  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:22.024767  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:22.082740  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:22.082785  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:24.597828  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:24.608497  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:24.608573  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:24.633951  291455 cri.go:89] found id: ""
	I1212 01:36:24.633978  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.633986  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:24.633992  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:24.634048  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:24.658904  291455 cri.go:89] found id: ""
	I1212 01:36:24.658929  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.658937  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:24.658944  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:24.659026  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:24.683684  291455 cri.go:89] found id: ""
	I1212 01:36:24.683709  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.683718  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:24.683724  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:24.683791  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:24.708745  291455 cri.go:89] found id: ""
	I1212 01:36:24.708770  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.708779  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:24.708786  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:24.708842  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:24.733454  291455 cri.go:89] found id: ""
	I1212 01:36:24.733479  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.733488  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:24.733494  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:24.733551  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:24.761862  291455 cri.go:89] found id: ""
	I1212 01:36:24.761889  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.761898  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:24.761904  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:24.761961  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:24.785388  291455 cri.go:89] found id: ""
	I1212 01:36:24.785415  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.785424  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:24.785430  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:24.785486  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:24.810681  291455 cri.go:89] found id: ""
	I1212 01:36:24.810707  291455 logs.go:282] 0 containers: []
	W1212 01:36:24.810717  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:24.810727  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:24.810743  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:24.865711  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:24.865752  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:24.880399  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:24.880431  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:24.943187  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:24.935391    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.936083    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937614    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937904    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.939457    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:24.935391    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.936083    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937614    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.937904    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:24.939457    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:24.943253  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:24.943274  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:24.967790  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:24.967820  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:36:26.036483  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:28.036687  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:30.036781  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
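	[editor's note] The W lines from PID 287206 interleaved through this excerpt come from a second, concurrently running test (the no-preload-361053 profile from TestStartStop); it polls the node's Ready condition every couple of seconds against 192.168.85.2:8443 and hits the same refused connections. A hedged client-go sketch of such a poll, assuming the kubeconfig path, interval, and nodeReady helper name (only the node name and error text come from the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady fetches the node and reports whether its Ready condition is True.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		n, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // e.g. "connect: connection refused" while the apiserver is down
		}
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			ok, err := nodeReady(cs, "no-preload-361053")
			if err != nil {
				fmt.Println("error getting node condition (will retry):", err)
			} else if ok {
				return
			}
			time.Sleep(2 * time.Second)
		}
	}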
	I1212 01:36:27.495634  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:27.506605  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:27.506700  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:27.548836  291455 cri.go:89] found id: ""
	I1212 01:36:27.548864  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.548873  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:27.548879  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:27.548953  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:27.600295  291455 cri.go:89] found id: ""
	I1212 01:36:27.600324  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.600334  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:27.600340  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:27.600397  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:27.625951  291455 cri.go:89] found id: ""
	I1212 01:36:27.625979  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.625987  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:27.625993  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:27.626062  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:27.651635  291455 cri.go:89] found id: ""
	I1212 01:36:27.651660  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.651668  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:27.651675  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:27.651734  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:27.676415  291455 cri.go:89] found id: ""
	I1212 01:36:27.676437  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.676446  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:27.676473  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:27.676535  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:27.699845  291455 cri.go:89] found id: ""
	I1212 01:36:27.699868  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.699876  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:27.699883  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:27.699938  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:27.735327  291455 cri.go:89] found id: ""
	I1212 01:36:27.735353  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.735362  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:27.735368  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:27.735428  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:27.759909  291455 cri.go:89] found id: ""
	I1212 01:36:27.759932  291455 logs.go:282] 0 containers: []
	W1212 01:36:27.759940  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:27.759950  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:27.759961  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:27.786638  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:27.786667  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:27.841026  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:27.841058  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:27.854475  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:27.854508  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:27.917832  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:27.909374    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.909866    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911432    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911952    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.913437    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:27.909374    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.909866    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911432    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.911952    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:27.913437    2200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:27.917855  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:27.917867  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:28.286241  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:36:28.389245  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:28.389279  291455 retry.go:31] will retry after 46.053342505s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:29.393036  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:36:29.455460  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 01:36:29.455496  291455 retry.go:31] will retry after 47.570792587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
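	[editor's note] The stderr's suggestion of --validate=false is a red herring here: that flag only skips the OpenAPI download, and the apply would still fail on its next request because the apiserver itself is not listening. The quickest way to see that is a raw TCP probe of the port, sketched below (illustrative; the address and timeout are taken from the error text, nothing else is from the log):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// matches the "dial tcp [::1]:8443: connect: connection refused" lines above
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port open")
	}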
	I1212 01:36:30.443136  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:30.453668  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:30.453743  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:30.480117  291455 cri.go:89] found id: ""
	I1212 01:36:30.480141  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.480149  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:30.480155  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:30.480214  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:30.505432  291455 cri.go:89] found id: ""
	I1212 01:36:30.505460  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.505470  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:30.505478  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:30.505543  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:30.530571  291455 cri.go:89] found id: ""
	I1212 01:36:30.530598  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.530608  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:30.530614  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:30.530675  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:30.587393  291455 cri.go:89] found id: ""
	I1212 01:36:30.587429  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.587439  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:30.587445  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:30.587517  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:30.631827  291455 cri.go:89] found id: ""
	I1212 01:36:30.631894  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.631917  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:30.631941  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:30.632019  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:30.655968  291455 cri.go:89] found id: ""
	I1212 01:36:30.656043  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.656065  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:30.656077  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:30.656143  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:30.680079  291455 cri.go:89] found id: ""
	I1212 01:36:30.680101  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.680110  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:30.680116  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:30.680175  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:30.704249  291455 cri.go:89] found id: ""
	I1212 01:36:30.704324  291455 logs.go:282] 0 containers: []
	W1212 01:36:30.704346  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:30.704365  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:30.704391  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:30.760587  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:30.760620  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:30.774118  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:30.774145  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:30.838730  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:30.831029    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.831642    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.833120    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.833546    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:30.835035    2315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:36:30.838753  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:30.838765  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:30.863650  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:30.863684  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
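The cycle above is minikube repeatedly asking containerd, through crictl, whether any control-plane container exists yet; every probe returns an empty ID list. A minimal sketch of the same probe, runnable by hand on the node over SSH (assuming only that crictl is on PATH, exactly as the log's own commands do):

    # Probe each expected control-plane container the way the log above does.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      # An empty result is what the log reports as: found id: ""
      [ -n "$ids" ] && echo "$name: $ids" || echo "no container matching \"$name\""
    done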
	W1212 01:36:32.039431  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:34.536636  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:33.391024  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:33.401417  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:33.401486  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:33.425243  291455 cri.go:89] found id: ""
	I1212 01:36:33.425265  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.425274  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:33.425280  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:33.425337  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:33.451769  291455 cri.go:89] found id: ""
	I1212 01:36:33.451792  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.451800  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:33.451806  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:33.451869  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:33.476935  291455 cri.go:89] found id: ""
	I1212 01:36:33.476960  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.476968  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:33.476974  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:33.477035  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:33.502755  291455 cri.go:89] found id: ""
	I1212 01:36:33.502781  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.502796  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:33.502802  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:33.502859  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:33.528810  291455 cri.go:89] found id: ""
	I1212 01:36:33.528835  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.528844  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:33.528851  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:33.528915  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:33.559119  291455 cri.go:89] found id: ""
	I1212 01:36:33.559197  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.559219  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:33.559237  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:33.559321  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:33.624518  291455 cri.go:89] found id: ""
	I1212 01:36:33.624547  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.624556  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:33.624562  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:33.624620  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:33.657379  291455 cri.go:89] found id: ""
	I1212 01:36:33.657401  291455 logs.go:282] 0 containers: []
	W1212 01:36:33.657409  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:33.657418  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:33.657428  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:33.713396  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:33.713430  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:33.727420  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:33.727450  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:33.796759  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:33.788822    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.789567    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.791169    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.791683    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:33.792822    2428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:36:33.796782  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:33.796795  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:33.822210  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:33.822246  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
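When every probe comes back empty, minikube falls back to gathering the same five log sources on each retry. The exact commands appear verbatim above and can be bundled into a single diagnostic pass (paths copied from the log; only the backticks are rewritten as $(...)):

    # Collect the same evidence minikube gathers after each failed wait.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a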
	W1212 01:36:37.036646  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:39.036700  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:36.350581  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:36.361065  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:36.361139  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:36.384625  291455 cri.go:89] found id: ""
	I1212 01:36:36.384647  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.384655  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:36.384661  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:36.384721  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:36.409313  291455 cri.go:89] found id: ""
	I1212 01:36:36.409338  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.409347  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:36.409353  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:36.409414  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:36.437773  291455 cri.go:89] found id: ""
	I1212 01:36:36.437796  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.437804  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:36.437811  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:36.437875  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:36.462058  291455 cri.go:89] found id: ""
	I1212 01:36:36.462080  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.462089  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:36.462096  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:36.462158  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:36.485881  291455 cri.go:89] found id: ""
	I1212 01:36:36.485902  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.485911  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:36.485917  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:36.485973  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:36.510249  291455 cri.go:89] found id: ""
	I1212 01:36:36.510318  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.510340  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:36.510362  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:36.510444  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:36.546913  291455 cri.go:89] found id: ""
	I1212 01:36:36.546948  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.546957  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:36.546963  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:36.547067  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:36.604532  291455 cri.go:89] found id: ""
	I1212 01:36:36.604562  291455 logs.go:282] 0 containers: []
	W1212 01:36:36.604571  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:36.604580  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:36.604593  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:36.684036  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:36.674581    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.675420    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.677203    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.677878    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:36.679666    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:36:36.684061  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:36.684074  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:36.709835  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:36.709866  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:36.737742  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:36.737768  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:36.792829  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:36.792864  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
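Every kubectl call above dies the same way, dial tcp [::1]:8443: connect: connection refused, meaning nothing is listening where the apiserver should be. Two quick checks narrow that down; the pgrep pattern is the one the log itself uses, while the /healthz curl is an added assumption (the standard apiserver health endpoint, not something this log exercises):

    # Is a kube-apiserver process alive at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    # Does anything answer on the expected port? (-k: the serving cert is self-signed)
    curl -ksS https://localhost:8443/healthz || echo "nothing listening on 8443"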
	I1212 01:36:39.307416  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:39.317852  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:39.317952  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:39.342723  291455 cri.go:89] found id: ""
	I1212 01:36:39.342747  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.342756  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:39.342763  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:39.342821  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:39.367433  291455 cri.go:89] found id: ""
	I1212 01:36:39.367472  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.367485  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:39.367492  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:39.367559  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:39.392871  291455 cri.go:89] found id: ""
	I1212 01:36:39.392896  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.392904  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:39.392911  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:39.392974  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:39.417519  291455 cri.go:89] found id: ""
	I1212 01:36:39.417546  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.417555  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:39.417562  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:39.417621  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:39.441729  291455 cri.go:89] found id: ""
	I1212 01:36:39.441760  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.441769  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:39.441775  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:39.441841  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:39.466118  291455 cri.go:89] found id: ""
	I1212 01:36:39.466147  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.466156  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:39.466163  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:39.466225  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:39.491269  291455 cri.go:89] found id: ""
	I1212 01:36:39.491292  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.491304  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:39.491310  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:39.491375  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:39.515625  291455 cri.go:89] found id: ""
	I1212 01:36:39.515650  291455 logs.go:282] 0 containers: []
	W1212 01:36:39.515659  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:39.515668  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:39.515679  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:39.595337  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:39.595376  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:39.617464  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:39.617500  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:39.698043  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:39.689431    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.689924    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.691689    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.692010    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:39.693641    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:36:39.698068  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:39.698080  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:39.722656  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:39.722692  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:40.784380  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1212 01:36:40.845895  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:36:40.846018  291455 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
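The storageclass addon fails for the same root cause: kubectl validation needs the OpenAPI document from the apiserver, and port 8443 refuses connections. The command below is the one from the log, with the --validate=false fallback that the error message itself suggests appended (useful only once the apiserver is actually reachable):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      -f /etc/kubernetes/addons/storageclass.yaml --validate=false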
	W1212 01:36:41.536608  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:44.036506  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:42.256252  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:42.269504  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:42.269576  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:42.296285  291455 cri.go:89] found id: ""
	I1212 01:36:42.296314  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.296323  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:42.296330  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:42.296393  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:42.324314  291455 cri.go:89] found id: ""
	I1212 01:36:42.324349  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.324366  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:42.324373  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:42.324448  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:42.353000  291455 cri.go:89] found id: ""
	I1212 01:36:42.353024  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.353033  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:42.353039  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:42.353103  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:42.379029  291455 cri.go:89] found id: ""
	I1212 01:36:42.379057  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.379066  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:42.379073  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:42.379141  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:42.404039  291455 cri.go:89] found id: ""
	I1212 01:36:42.404068  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.404077  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:42.404084  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:42.404150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:42.429848  291455 cri.go:89] found id: ""
	I1212 01:36:42.429877  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.429887  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:42.429893  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:42.429952  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:42.454022  291455 cri.go:89] found id: ""
	I1212 01:36:42.454049  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.454058  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:42.454065  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:42.454126  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:42.481205  291455 cri.go:89] found id: ""
	I1212 01:36:42.481231  291455 logs.go:282] 0 containers: []
	W1212 01:36:42.481240  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:42.481249  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:42.481260  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:42.511373  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:42.511400  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:42.594053  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:42.594092  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:42.613172  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:42.613201  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:42.688118  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:42.678899    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.679678    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.681197    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.681708    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:42.683477    2782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:36:42.688142  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:42.688155  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:45.213644  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:45.234582  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:45.234677  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:45.268686  291455 cri.go:89] found id: ""
	I1212 01:36:45.268715  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.268732  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:45.268741  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:45.268827  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:45.297061  291455 cri.go:89] found id: ""
	I1212 01:36:45.297115  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.297132  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:45.297139  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:45.297272  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:45.324030  291455 cri.go:89] found id: ""
	I1212 01:36:45.324063  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.324072  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:45.324078  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:45.324144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:45.354569  291455 cri.go:89] found id: ""
	I1212 01:36:45.354595  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.354612  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:45.354619  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:45.354697  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:45.380068  291455 cri.go:89] found id: ""
	I1212 01:36:45.380133  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.380160  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:45.380175  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:45.380249  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:45.403554  291455 cri.go:89] found id: ""
	I1212 01:36:45.403620  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.403643  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:45.403664  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:45.403746  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:45.426534  291455 cri.go:89] found id: ""
	I1212 01:36:45.426560  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.426568  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:45.426574  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:45.426637  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:45.455346  291455 cri.go:89] found id: ""
	I1212 01:36:45.455414  291455 logs.go:282] 0 containers: []
	W1212 01:36:45.455438  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:45.455457  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:45.455469  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:45.510486  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:45.510521  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:45.523916  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:45.523944  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:45.642152  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:45.624680    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.625385    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.635164    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.635878    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:45.637755    2877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:36:45.642173  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:45.642186  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:45.667625  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:45.667661  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:36:46.535816  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:48.537737  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:48.197188  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:48.208199  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:48.208272  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:48.236943  291455 cri.go:89] found id: ""
	I1212 01:36:48.236969  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.236977  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:48.236984  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:48.237048  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:48.262444  291455 cri.go:89] found id: ""
	I1212 01:36:48.262468  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.262477  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:48.262483  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:48.262545  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:48.292262  291455 cri.go:89] found id: ""
	I1212 01:36:48.292292  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.292301  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:48.292307  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:48.292370  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:48.318028  291455 cri.go:89] found id: ""
	I1212 01:36:48.318053  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.318063  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:48.318069  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:48.318128  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:48.343500  291455 cri.go:89] found id: ""
	I1212 01:36:48.343524  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.343532  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:48.343539  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:48.343620  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:48.374537  291455 cri.go:89] found id: ""
	I1212 01:36:48.374563  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.374572  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:48.374578  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:48.374657  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:48.399165  291455 cri.go:89] found id: ""
	I1212 01:36:48.399188  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.399197  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:48.399203  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:48.399265  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:48.424429  291455 cri.go:89] found id: ""
	I1212 01:36:48.424452  291455 logs.go:282] 0 containers: []
	W1212 01:36:48.424460  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:48.424469  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:48.424482  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:48.450297  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:48.450336  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:48.477992  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:48.478017  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:48.533513  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:48.533546  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:48.554972  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:48.555078  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:48.639199  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:48.628523    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.629323    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.630881    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.631460    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:48.634979    3007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:36:51.139443  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:51.152801  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:51.152869  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:51.181036  291455 cri.go:89] found id: ""
	I1212 01:36:51.181060  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.181069  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:51.181076  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:51.181139  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:51.205637  291455 cri.go:89] found id: ""
	I1212 01:36:51.205664  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.205673  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:51.205680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:51.205744  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:51.230375  291455 cri.go:89] found id: ""
	I1212 01:36:51.230401  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.230410  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:51.230416  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:51.230479  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:51.260594  291455 cri.go:89] found id: ""
	I1212 01:36:51.260620  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.260629  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:51.260636  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:51.260693  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:51.286513  291455 cri.go:89] found id: ""
	I1212 01:36:51.286538  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.286548  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:51.286554  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:51.286613  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:51.320488  291455 cri.go:89] found id: ""
	I1212 01:36:51.320511  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.320519  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:51.320526  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:51.320593  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1212 01:36:51.035818  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:53.036491  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:55.036601  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
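In parallel, the no-preload test (process 287206) is polling the node object directly, GET https://192.168.85.2:8443/api/v1/nodes/no-preload-361053, and hitting the same refused port. The equivalent hand check, as a sketch (the kubeconfig path is an assumption; the jsonpath extracts the Ready condition the retry loop above is waiting on):

    kubectl --kubeconfig "$HOME/.kube/config" get node no-preload-361053 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'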
	I1212 01:36:51.346751  291455 cri.go:89] found id: ""
	I1212 01:36:51.346773  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.346782  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:51.346788  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:51.346848  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:51.372774  291455 cri.go:89] found id: ""
	I1212 01:36:51.372797  291455 logs.go:282] 0 containers: []
	W1212 01:36:51.372805  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:51.372820  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:51.372832  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:51.397287  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:51.397322  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:51.424395  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:51.424423  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:51.484364  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:51.484400  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:51.497751  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:51.497778  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:51.609432  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:51.593650    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.595213    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.596974    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.601995    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.602562    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:51.593650    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.595213    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.596974    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.601995    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:51.602562    3114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:54.111055  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:54.123333  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:54.123404  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:54.147152  291455 cri.go:89] found id: ""
	I1212 01:36:54.147218  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.147246  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:54.147268  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:54.147370  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:54.172120  291455 cri.go:89] found id: ""
	I1212 01:36:54.172186  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.172212  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:54.172233  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:54.172318  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:54.199177  291455 cri.go:89] found id: ""
	I1212 01:36:54.199242  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.199262  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:54.199269  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:54.199346  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:54.223691  291455 cri.go:89] found id: ""
	I1212 01:36:54.223716  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.223724  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:54.223731  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:54.223796  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:54.248969  291455 cri.go:89] found id: ""
	I1212 01:36:54.248991  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.249000  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:54.249007  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:54.249076  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:54.274124  291455 cri.go:89] found id: ""
	I1212 01:36:54.274149  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.274158  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:54.274165  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:54.274223  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:54.299049  291455 cri.go:89] found id: ""
	I1212 01:36:54.299071  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.299079  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:54.299085  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:54.299142  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:54.323692  291455 cri.go:89] found id: ""
	I1212 01:36:54.323727  291455 logs.go:282] 0 containers: []
	W1212 01:36:54.323736  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:54.323745  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:54.323757  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:54.337075  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:54.337102  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:54.405905  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:54.396717    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.397409    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399032    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399536    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.401700    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:54.396717    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.397409    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399032    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.399536    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:54.401700    3213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:54.405927  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:54.405938  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:54.432446  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:54.432489  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:54.461143  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:54.461170  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 01:36:57.536480  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:36:59.536672  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:36:57.017892  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:57.031680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:57.031754  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:57.058619  291455 cri.go:89] found id: ""
	I1212 01:36:57.058644  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.058661  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:57.058670  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:57.058744  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:57.082470  291455 cri.go:89] found id: ""
	I1212 01:36:57.082496  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.082505  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:57.082511  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:57.082569  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:36:57.107129  291455 cri.go:89] found id: ""
	I1212 01:36:57.107152  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.107161  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:36:57.107174  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:36:57.107235  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:36:57.131240  291455 cri.go:89] found id: ""
	I1212 01:36:57.131264  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.131272  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:36:57.131282  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:36:57.131339  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:36:57.161702  291455 cri.go:89] found id: ""
	I1212 01:36:57.161728  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.161737  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:36:57.161743  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:36:57.161800  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:36:57.186568  291455 cri.go:89] found id: ""
	I1212 01:36:57.186592  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.186601  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:36:57.186607  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:36:57.186724  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:36:57.211286  291455 cri.go:89] found id: ""
	I1212 01:36:57.211310  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.211319  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:36:57.211325  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:36:57.211382  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:36:57.236370  291455 cri.go:89] found id: ""
	I1212 01:36:57.236394  291455 logs.go:282] 0 containers: []
	W1212 01:36:57.236403  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:36:57.236412  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:36:57.236423  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:36:57.292504  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:36:57.292539  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:36:57.306287  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:36:57.306314  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:36:57.369836  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:36:57.361540    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.362207    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.363914    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.364465    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.366079    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:36:57.361540    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.362207    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.363914    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.364465    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:36:57.366079    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:36:57.369856  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:36:57.369870  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:36:57.395588  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:36:57.395625  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:36:59.923774  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:36:59.935843  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:36:59.935936  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:36:59.961362  291455 cri.go:89] found id: ""
	I1212 01:36:59.961383  291455 logs.go:282] 0 containers: []
	W1212 01:36:59.961392  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:36:59.961398  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:36:59.961453  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:36:59.987418  291455 cri.go:89] found id: ""
	I1212 01:36:59.987448  291455 logs.go:282] 0 containers: []
	W1212 01:36:59.987458  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:36:59.987463  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:36:59.987521  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:00.083321  291455 cri.go:89] found id: ""
	I1212 01:37:00.083352  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.083362  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:00.083369  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:00.083456  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:00.200170  291455 cri.go:89] found id: ""
	I1212 01:37:00.200535  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.200580  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:00.200686  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:00.201034  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:00.291145  291455 cri.go:89] found id: ""
	I1212 01:37:00.291235  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.291284  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:00.291318  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:00.291414  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:00.393558  291455 cri.go:89] found id: ""
	I1212 01:37:00.393606  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.393618  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:00.393626  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:00.393706  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:00.423985  291455 cri.go:89] found id: ""
	I1212 01:37:00.424023  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.424035  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:00.424041  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:00.424117  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:00.451670  291455 cri.go:89] found id: ""
	I1212 01:37:00.451695  291455 logs.go:282] 0 containers: []
	W1212 01:37:00.451705  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:00.451715  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:00.451728  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:00.509577  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:00.509614  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:00.525099  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:00.525133  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:00.635419  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:00.627409    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.628095    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.629751    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.630057    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.631588    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:00.627409    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.628095    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.629751    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.630057    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:00.631588    3445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:00.635455  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:00.635468  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:00.663944  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:00.663984  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:37:02.037994  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:04.536623  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:03.194688  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:03.205352  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:03.205425  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:03.233099  291455 cri.go:89] found id: ""
	I1212 01:37:03.233131  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.233140  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:03.233146  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:03.233217  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:03.257676  291455 cri.go:89] found id: ""
	I1212 01:37:03.257700  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.257710  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:03.257716  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:03.257802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:03.282622  291455 cri.go:89] found id: ""
	I1212 01:37:03.282696  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.282719  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:03.282739  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:03.282834  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:03.309162  291455 cri.go:89] found id: ""
	I1212 01:37:03.309190  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.309199  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:03.309205  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:03.309265  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:03.334284  291455 cri.go:89] found id: ""
	I1212 01:37:03.334318  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.334327  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:03.334334  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:03.334401  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:03.361255  291455 cri.go:89] found id: ""
	I1212 01:37:03.361281  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.361290  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:03.361296  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:03.361376  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:03.386372  291455 cri.go:89] found id: ""
	I1212 01:37:03.386406  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.386415  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:03.386421  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:03.386490  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:03.412127  291455 cri.go:89] found id: ""
	I1212 01:37:03.412151  291455 logs.go:282] 0 containers: []
	W1212 01:37:03.412160  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:03.412170  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:03.412181  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:03.467933  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:03.467980  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:03.481636  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:03.481663  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:03.565451  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:03.551611    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.552450    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.553999    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.554567    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.556109    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:03.551611    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.552450    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.553999    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.554567    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:03.556109    3561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:03.565476  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:03.565548  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:03.614744  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:03.614783  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:06.159160  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:06.169841  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:06.169916  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:06.196496  291455 cri.go:89] found id: ""
	I1212 01:37:06.196521  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.196529  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:06.196536  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:06.196594  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:06.229404  291455 cri.go:89] found id: ""
	I1212 01:37:06.229429  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.229438  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:06.229444  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:06.229505  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:06.254056  291455 cri.go:89] found id: ""
	I1212 01:37:06.254081  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.254089  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:06.254095  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:06.254154  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:06.278424  291455 cri.go:89] found id: ""
	I1212 01:37:06.278453  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.278462  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:06.278469  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:06.278527  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:06.302517  291455 cri.go:89] found id: ""
	I1212 01:37:06.302545  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.302554  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:06.302560  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:06.302617  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:06.328634  291455 cri.go:89] found id: ""
	I1212 01:37:06.328657  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.328665  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:06.328671  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:06.328728  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1212 01:37:07.035836  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:09.035916  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:06.352026  291455 cri.go:89] found id: ""
	I1212 01:37:06.352099  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.352115  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:06.352125  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:06.352199  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:06.376075  291455 cri.go:89] found id: ""
	I1212 01:37:06.376101  291455 logs.go:282] 0 containers: []
	W1212 01:37:06.376110  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:06.376119  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:06.376130  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:06.400451  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:06.400481  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:06.428356  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:06.428385  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:06.484230  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:06.484267  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:06.498047  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:06.498074  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:06.610705  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:06.593235    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.594305    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.599655    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.603092    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.603422    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:06.593235    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.594305    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.599655    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.603092    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:06.603422    3686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:09.111534  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:09.121786  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:09.121855  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:09.148241  291455 cri.go:89] found id: ""
	I1212 01:37:09.148267  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.148275  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:09.148282  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:09.148341  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:09.172742  291455 cri.go:89] found id: ""
	I1212 01:37:09.172764  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.172773  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:09.172779  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:09.172835  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:09.197560  291455 cri.go:89] found id: ""
	I1212 01:37:09.197586  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.197595  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:09.197601  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:09.197673  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:09.222352  291455 cri.go:89] found id: ""
	I1212 01:37:09.222377  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.222386  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:09.222392  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:09.222450  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:09.246770  291455 cri.go:89] found id: ""
	I1212 01:37:09.246794  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.246802  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:09.246809  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:09.246875  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:09.273237  291455 cri.go:89] found id: ""
	I1212 01:37:09.273260  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.273268  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:09.273275  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:09.273342  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:09.298382  291455 cri.go:89] found id: ""
	I1212 01:37:09.298405  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.298414  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:09.298421  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:09.298479  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:09.326366  291455 cri.go:89] found id: ""
	I1212 01:37:09.326388  291455 logs.go:282] 0 containers: []
	W1212 01:37:09.326396  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:09.326405  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:09.326416  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:09.339892  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:09.339920  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:09.408533  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:09.399583    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.400465    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.402243    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.402860    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.404361    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:09.399583    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.400465    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.402243    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.402860    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:09.404361    3784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:09.408555  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:09.408568  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:09.434113  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:09.434149  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:09.469040  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:09.469065  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 01:37:11.036562  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:13.536873  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:12.025102  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:12.036649  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:12.036722  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:12.064882  291455 cri.go:89] found id: ""
	I1212 01:37:12.064905  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.064913  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:12.064919  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:12.064979  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:12.090328  291455 cri.go:89] found id: ""
	I1212 01:37:12.090354  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.090362  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:12.090369  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:12.090429  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:12.115640  291455 cri.go:89] found id: ""
	I1212 01:37:12.115665  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.115674  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:12.115680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:12.115741  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:12.140726  291455 cri.go:89] found id: ""
	I1212 01:37:12.140752  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.140773  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:12.140810  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:12.140900  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:12.165182  291455 cri.go:89] found id: ""
	I1212 01:37:12.165208  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.165216  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:12.165223  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:12.165282  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:12.189365  291455 cri.go:89] found id: ""
	I1212 01:37:12.189389  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.189398  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:12.189405  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:12.189463  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:12.214048  291455 cri.go:89] found id: ""
	I1212 01:37:12.214073  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.214082  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:12.214088  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:12.214148  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:12.240794  291455 cri.go:89] found id: ""
	I1212 01:37:12.240821  291455 logs.go:282] 0 containers: []
	W1212 01:37:12.240830  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:12.240840  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:12.240851  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:12.300894  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:12.300936  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:12.314783  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:12.314817  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:12.382362  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:12.373621    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.374371    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.376069    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.376636    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.378249    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:12.373621    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.374371    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.376069    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.376636    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:12.378249    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:12.382385  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:12.382397  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:12.408884  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:12.408921  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:14.444251  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1212 01:37:14.509220  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:37:14.509386  291455 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 01:37:14.942929  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:14.953301  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:14.953373  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:14.977865  291455 cri.go:89] found id: ""
	I1212 01:37:14.977933  291455 logs.go:282] 0 containers: []
	W1212 01:37:14.977947  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:14.977954  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:14.978019  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:15.012296  291455 cri.go:89] found id: ""
	I1212 01:37:15.012325  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.012335  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:15.012342  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:15.012414  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:15.044602  291455 cri.go:89] found id: ""
	I1212 01:37:15.044629  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.044638  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:15.044644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:15.044705  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:15.072008  291455 cri.go:89] found id: ""
	I1212 01:37:15.072035  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.072043  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:15.072049  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:15.072112  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:15.098264  291455 cri.go:89] found id: ""
	I1212 01:37:15.098293  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.098308  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:15.098316  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:15.098390  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:15.124176  291455 cri.go:89] found id: ""
	I1212 01:37:15.124203  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.124212  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:15.124218  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:15.124278  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:15.148763  291455 cri.go:89] found id: ""
	I1212 01:37:15.148788  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.148797  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:15.148803  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:15.148880  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:15.173843  291455 cri.go:89] found id: ""
	I1212 01:37:15.173870  291455 logs.go:282] 0 containers: []
	W1212 01:37:15.173879  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:15.173889  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:15.173901  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:15.203728  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:15.203757  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:15.259019  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:15.259053  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:15.272480  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:15.272509  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:15.337558  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:15.329071    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.329763    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.331497    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.332089    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.333695    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:15.329071    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.329763    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.331497    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.332089    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:15.333695    4035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:15.337580  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:15.337592  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:17.027133  291455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1212 01:37:17.109229  291455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 01:37:17.109319  291455 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 01:37:17.112386  291455 out.go:179] * Enabled addons: 
	W1212 01:37:16.035841  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:18.035966  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:20.036082  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:17.115266  291455 addons.go:530] duration metric: took 1m58.649036473s for enable addons: enabled=[]
	I1212 01:37:17.864277  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:17.875687  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:17.875762  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:17.900504  291455 cri.go:89] found id: ""
	I1212 01:37:17.900527  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.900536  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:17.900542  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:17.900626  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:17.925113  291455 cri.go:89] found id: ""
	I1212 01:37:17.925136  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.925145  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:17.925151  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:17.925238  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:17.950585  291455 cri.go:89] found id: ""
	I1212 01:37:17.950611  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.950620  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:17.950626  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:17.950687  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:17.977787  291455 cri.go:89] found id: ""
	I1212 01:37:17.977813  291455 logs.go:282] 0 containers: []
	W1212 01:37:17.977822  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:17.977828  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:17.977888  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:18.006885  291455 cri.go:89] found id: ""
	I1212 01:37:18.006967  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.007019  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:18.007043  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:18.007118  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:18.033137  291455 cri.go:89] found id: ""
	I1212 01:37:18.033161  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.033170  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:18.033176  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:18.033238  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:18.058968  291455 cri.go:89] found id: ""
	I1212 01:37:18.059009  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.059019  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:18.059025  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:18.059087  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:18.084927  291455 cri.go:89] found id: ""
	I1212 01:37:18.084961  291455 logs.go:282] 0 containers: []
	W1212 01:37:18.084971  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:18.084981  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:18.084994  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:18.153070  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:18.145061    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.145891    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.147207    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.147819    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.149000    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:18.145061    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.145891    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.147207    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.147819    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:18.149000    4138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:18.153101  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:18.153113  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:18.178193  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:18.178227  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:18.205844  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:18.205874  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:18.261619  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:18.261657  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:20.775910  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:20.797119  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:20.797192  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:20.870519  291455 cri.go:89] found id: ""
	I1212 01:37:20.870556  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.870566  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:20.870573  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:20.870642  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:20.895021  291455 cri.go:89] found id: ""
	I1212 01:37:20.895044  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.895053  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:20.895059  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:20.895119  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:20.918242  291455 cri.go:89] found id: ""
	I1212 01:37:20.918270  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.918279  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:20.918286  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:20.918340  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:20.942755  291455 cri.go:89] found id: ""
	I1212 01:37:20.942781  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.942790  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:20.942796  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:20.942855  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:20.966487  291455 cri.go:89] found id: ""
	I1212 01:37:20.966551  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.966574  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:20.966595  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:20.966680  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:20.992848  291455 cri.go:89] found id: ""
	I1212 01:37:20.992922  291455 logs.go:282] 0 containers: []
	W1212 01:37:20.992945  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:20.992959  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:20.993035  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:21.025558  291455 cri.go:89] found id: ""
	I1212 01:37:21.025587  291455 logs.go:282] 0 containers: []
	W1212 01:37:21.025596  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:21.025602  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:21.025663  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:21.050967  291455 cri.go:89] found id: ""
	I1212 01:37:21.051023  291455 logs.go:282] 0 containers: []
	W1212 01:37:21.051032  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:21.051041  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:21.051057  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:21.077368  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:21.077396  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:21.133503  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:21.133538  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:21.147218  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:21.147245  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:21.209763  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:21.201479    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.202138    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.203803    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.204409    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.205960    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:21.201479    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.202138    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.203803    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.204409    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:21.205960    4273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:21.209786  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:21.209799  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1212 01:37:22.036593  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:24.536591  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:23.737746  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:23.747983  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:23.748051  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:23.772289  291455 cri.go:89] found id: ""
	I1212 01:37:23.772315  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.772333  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:23.772341  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:23.772420  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:23.848280  291455 cri.go:89] found id: ""
	I1212 01:37:23.848306  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.848315  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:23.848322  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:23.848386  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:23.884675  291455 cri.go:89] found id: ""
	I1212 01:37:23.884700  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.884709  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:23.884715  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:23.884777  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:23.914530  291455 cri.go:89] found id: ""
	I1212 01:37:23.914553  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.914561  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:23.914569  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:23.914626  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:23.940203  291455 cri.go:89] found id: ""
	I1212 01:37:23.940275  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.940292  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:23.940299  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:23.940364  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:23.968920  291455 cri.go:89] found id: ""
	I1212 01:37:23.968944  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.968952  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:23.968959  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:23.969016  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:23.993883  291455 cri.go:89] found id: ""
	I1212 01:37:23.993910  291455 logs.go:282] 0 containers: []
	W1212 01:37:23.993919  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:23.993925  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:23.993985  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:24.019876  291455 cri.go:89] found id: ""
	I1212 01:37:24.019901  291455 logs.go:282] 0 containers: []
	W1212 01:37:24.019909  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:24.019922  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:24.019935  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:24.052560  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:24.052586  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:24.107812  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:24.107847  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:24.121870  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:24.121902  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:24.193432  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:24.184434    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.184974    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.185943    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.187426    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.187845    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:24.184434    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.184974    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.185943    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.187426    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:24.187845    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:24.193458  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:24.193471  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1212 01:37:26.536664  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:29.036444  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:26.720901  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:26.732114  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:26.732194  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:26.759421  291455 cri.go:89] found id: ""
	I1212 01:37:26.759443  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.759451  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:26.759458  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:26.759523  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:26.801227  291455 cri.go:89] found id: ""
	I1212 01:37:26.801252  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.801261  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:26.801290  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:26.801371  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:26.836143  291455 cri.go:89] found id: ""
	I1212 01:37:26.836168  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.836178  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:26.836184  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:26.836276  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:26.880334  291455 cri.go:89] found id: ""
	I1212 01:37:26.880373  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.880382  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:26.880388  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:26.880477  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:26.915704  291455 cri.go:89] found id: ""
	I1212 01:37:26.915769  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.915786  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:26.915793  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:26.915864  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:26.943219  291455 cri.go:89] found id: ""
	I1212 01:37:26.943252  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.943262  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:26.943269  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:26.943350  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:26.968790  291455 cri.go:89] found id: ""
	I1212 01:37:26.968867  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.968882  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:26.968889  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:26.968946  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:26.993867  291455 cri.go:89] found id: ""
	I1212 01:37:26.993892  291455 logs.go:282] 0 containers: []
	W1212 01:37:26.993908  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:26.993918  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:26.993929  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:27.025483  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:27.025547  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:27.081672  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:27.081704  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:27.095698  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:27.095724  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:27.161161  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:27.151369    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.152034    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.153696    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.156078    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.157312    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:27.151369    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.152034    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.153696    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.156078    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:27.157312    4496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:27.161189  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:27.161202  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:29.686768  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:29.699055  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:29.699131  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:29.725025  291455 cri.go:89] found id: ""
	I1212 01:37:29.725050  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.725059  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:29.725065  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:29.725140  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:29.749378  291455 cri.go:89] found id: ""
	I1212 01:37:29.749401  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.749410  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:29.749416  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:29.749481  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:29.773953  291455 cri.go:89] found id: ""
	I1212 01:37:29.773978  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.773987  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:29.773993  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:29.774052  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:29.831695  291455 cri.go:89] found id: ""
	I1212 01:37:29.831723  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.831732  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:29.831738  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:29.831794  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:29.881376  291455 cri.go:89] found id: ""
	I1212 01:37:29.881401  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.881412  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:29.881418  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:29.881477  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:29.905463  291455 cri.go:89] found id: ""
	I1212 01:37:29.905497  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.905506  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:29.905530  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:29.905618  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:29.929393  291455 cri.go:89] found id: ""
	I1212 01:37:29.929427  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.929436  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:29.929442  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:29.929507  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:29.956794  291455 cri.go:89] found id: ""
	I1212 01:37:29.956820  291455 logs.go:282] 0 containers: []
	W1212 01:37:29.956829  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:29.956839  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:29.956850  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:29.981845  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:29.981878  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:30.037712  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:30.037751  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:30.096286  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:30.096320  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:30.111120  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:30.111160  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:30.180653  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:30.171653    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.172384    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.174167    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.174765    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.176527    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:30.171653    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.172384    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.174167    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.174765    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:30.176527    4608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 01:37:31.535946  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:33.536464  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:32.681768  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:32.693283  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:32.693354  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:32.720606  291455 cri.go:89] found id: ""
	I1212 01:37:32.720629  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.720638  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:32.720644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:32.720703  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:32.747145  291455 cri.go:89] found id: ""
	I1212 01:37:32.747167  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.747177  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:32.747185  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:32.747243  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:32.772037  291455 cri.go:89] found id: ""
	I1212 01:37:32.772061  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.772070  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:32.772076  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:32.772134  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:32.862885  291455 cri.go:89] found id: ""
	I1212 01:37:32.862910  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.862919  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:32.862925  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:32.862983  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:32.888016  291455 cri.go:89] found id: ""
	I1212 01:37:32.888038  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.888049  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:32.888055  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:32.888115  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:32.912450  291455 cri.go:89] found id: ""
	I1212 01:37:32.912472  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.912481  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:32.912487  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:32.912544  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:32.935759  291455 cri.go:89] found id: ""
	I1212 01:37:32.935781  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.935790  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:32.935797  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:32.935855  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:32.963827  291455 cri.go:89] found id: ""
	I1212 01:37:32.963850  291455 logs.go:282] 0 containers: []
	W1212 01:37:32.963858  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:32.963869  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:32.963880  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:32.988758  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:32.988788  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:33.021942  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:33.021973  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:33.078907  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:33.078940  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:33.094242  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:33.094270  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:33.157981  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:33.149433    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.150328    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.151907    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.152360    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:33.153844    4719 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
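The cycle above is minikube's log gatherer probing for each expected control-plane container via crictl, finding none, and then falling back to journalctl, dmesg, and kubectl describe nodes. A minimal Go sketch of that probe, using the exact commands shown in the log (the loop structure and names here are illustrative, not minikube's actual source):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // The same component names probed in the log above.
    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    }

    func main() {
    	for _, name := range components {
    		// Mirrors: sudo crictl ps -a --quiet --name=<component>
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil || strings.TrimSpace(string(out)) == "" {
    			fmt.Printf("no container was found matching %q (err: %v)\n", name, err)
    		}
    	}
    }

An empty result for every component, as seen repeatedly below, means nothing of the control plane ever started, which is why every subsequent kubectl call is refused on localhost:8443.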
	I1212 01:37:35.659737  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:35.672022  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:35.672098  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:35.701308  291455 cri.go:89] found id: ""
	I1212 01:37:35.701334  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.701343  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:35.701349  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:35.701408  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:35.726385  291455 cri.go:89] found id: ""
	I1212 01:37:35.726409  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.726418  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:35.726424  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:35.726482  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:35.751557  291455 cri.go:89] found id: ""
	I1212 01:37:35.751593  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.751604  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:35.751610  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:35.751679  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:35.776892  291455 cri.go:89] found id: ""
	I1212 01:37:35.776956  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.776971  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:35.776982  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:35.777044  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:35.824076  291455 cri.go:89] found id: ""
	I1212 01:37:35.824107  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.824116  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:35.824122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:35.824179  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:35.880084  291455 cri.go:89] found id: ""
	I1212 01:37:35.880107  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.880115  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:35.880122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:35.880192  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:35.907066  291455 cri.go:89] found id: ""
	I1212 01:37:35.907091  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.907099  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:35.907105  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:35.907166  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:35.936636  291455 cri.go:89] found id: ""
	I1212 01:37:35.936713  291455 logs.go:282] 0 containers: []
	W1212 01:37:35.936729  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:35.936739  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:35.936750  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:35.993085  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:35.993119  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:36.007767  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:36.007856  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:36.076959  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:36.068314    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.068888    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.070632    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.071390    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:36.072929    4818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:36.076984  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:36.076997  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:36.103429  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:36.103463  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:37:36.036277  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:38.536154  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:38.632890  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:38.643831  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:38.643909  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:38.671085  291455 cri.go:89] found id: ""
	I1212 01:37:38.671108  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.671116  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:38.671122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:38.671182  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:38.694933  291455 cri.go:89] found id: ""
	I1212 01:37:38.694958  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.694966  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:38.694972  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:38.695070  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:38.723033  291455 cri.go:89] found id: ""
	I1212 01:37:38.723060  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.723069  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:38.723075  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:38.723135  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:38.748068  291455 cri.go:89] found id: ""
	I1212 01:37:38.748093  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.748102  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:38.748109  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:38.748169  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:38.778336  291455 cri.go:89] found id: ""
	I1212 01:37:38.778362  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.778371  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:38.778377  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:38.778438  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:38.824425  291455 cri.go:89] found id: ""
	I1212 01:37:38.824452  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.824461  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:38.824468  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:38.824526  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:38.869581  291455 cri.go:89] found id: ""
	I1212 01:37:38.869607  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.869616  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:38.869623  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:38.869684  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:38.898375  291455 cri.go:89] found id: ""
	I1212 01:37:38.898401  291455 logs.go:282] 0 containers: []
	W1212 01:37:38.898411  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:38.898420  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:38.898431  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:38.924559  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:38.924594  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:38.954848  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:38.954884  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:39.010528  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:39.010564  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:39.024383  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:39.024412  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:39.090716  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:39.082311    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.082890    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.084642    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.085084    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:39.086585    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1212 01:37:40.536718  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:43.036535  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:45.036776  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
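The interleaved W lines belong to a second process (287206) polling the no-preload-361053 node's Ready condition against 192.168.85.2:8443 and retrying while the connection is refused; its timestamps jump relative to the 291455 lines because the two processes write to the log concurrently. A minimal client-go sketch of that kind of poll, with a placeholder kubeconfig path (this is the general shape, not minikube's node_ready.go):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder path; the test uses the profile's own kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		node, err := client.CoreV1().Nodes().Get(context.TODO(), "no-preload-361053", metav1.GetOptions{})
    		if err != nil {
    			// While the apiserver is down this fails with "connection refused", as in the log.
    			fmt.Println("error getting node (will retry):", err)
    			time.Sleep(2 * time.Second)
    			continue
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				fmt.Println("node is Ready")
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    }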
	I1212 01:37:41.591539  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:41.602064  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:41.602135  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:41.626512  291455 cri.go:89] found id: ""
	I1212 01:37:41.626584  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.626609  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:41.626629  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:41.626713  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:41.651218  291455 cri.go:89] found id: ""
	I1212 01:37:41.651294  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.651317  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:41.651339  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:41.651429  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:41.676032  291455 cri.go:89] found id: ""
	I1212 01:37:41.676055  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.676064  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:41.676070  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:41.676144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:41.700472  291455 cri.go:89] found id: ""
	I1212 01:37:41.700495  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.700509  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:41.700516  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:41.700573  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:41.728292  291455 cri.go:89] found id: ""
	I1212 01:37:41.728317  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.728326  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:41.728332  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:41.728413  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:41.752458  291455 cri.go:89] found id: ""
	I1212 01:37:41.752496  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.752508  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:41.752515  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:41.752687  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:41.778677  291455 cri.go:89] found id: ""
	I1212 01:37:41.778703  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.778711  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:41.778717  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:41.778802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:41.831103  291455 cri.go:89] found id: ""
	I1212 01:37:41.831129  291455 logs.go:282] 0 containers: []
	W1212 01:37:41.831138  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:41.831147  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:41.831158  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:41.922931  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:41.914201    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.914946    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.916560    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.917145    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:41.918787    5039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:41.922954  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:41.922966  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:41.948574  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:41.948606  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:41.976883  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:41.976910  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:42.031740  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:42.031774  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:44.547156  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:44.557779  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:44.557852  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:44.585516  291455 cri.go:89] found id: ""
	I1212 01:37:44.585539  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.585547  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:44.585554  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:44.585614  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:44.610080  291455 cri.go:89] found id: ""
	I1212 01:37:44.610146  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.610170  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:44.610188  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:44.610282  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:44.634333  291455 cri.go:89] found id: ""
	I1212 01:37:44.634403  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.634428  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:44.634449  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:44.634538  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:44.659415  291455 cri.go:89] found id: ""
	I1212 01:37:44.659441  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.659450  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:44.659457  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:44.659518  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:44.688713  291455 cri.go:89] found id: ""
	I1212 01:37:44.688738  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.688747  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:44.688753  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:44.688813  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:44.713219  291455 cri.go:89] found id: ""
	I1212 01:37:44.713245  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.713262  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:44.713270  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:44.713334  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:44.736447  291455 cri.go:89] found id: ""
	I1212 01:37:44.736472  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.736480  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:44.736486  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:44.736562  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:44.762258  291455 cri.go:89] found id: ""
	I1212 01:37:44.762283  291455 logs.go:282] 0 containers: []
	W1212 01:37:44.762292  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:44.762324  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:44.762341  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:44.839027  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:44.839065  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:44.856616  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:44.856643  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:44.936247  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:44.928242    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.928784    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.930267    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.930803    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:44.932347    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:44.936278  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:44.936291  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:44.961626  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:44.961659  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:37:47.536481  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:49.536708  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:47.490976  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:47.501776  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:47.501852  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:47.532240  291455 cri.go:89] found id: ""
	I1212 01:37:47.532263  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.532271  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:47.532276  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:47.532336  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:47.556453  291455 cri.go:89] found id: ""
	I1212 01:37:47.556475  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.556484  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:47.556490  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:47.556551  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:47.580605  291455 cri.go:89] found id: ""
	I1212 01:37:47.580628  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.580637  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:47.580643  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:47.580709  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:47.605106  291455 cri.go:89] found id: ""
	I1212 01:37:47.605130  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.605139  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:47.605145  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:47.605224  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:47.630587  291455 cri.go:89] found id: ""
	I1212 01:37:47.630613  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.630622  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:47.630629  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:47.630733  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:47.656391  291455 cri.go:89] found id: ""
	I1212 01:37:47.656416  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.656424  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:47.656431  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:47.656489  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:47.680787  291455 cri.go:89] found id: ""
	I1212 01:37:47.680817  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.680826  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:47.680832  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:47.680913  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:47.706371  291455 cri.go:89] found id: ""
	I1212 01:37:47.706396  291455 logs.go:282] 0 containers: []
	W1212 01:37:47.706405  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:47.706414  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:47.706458  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:47.763648  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:47.763687  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:47.777355  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:47.777383  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:47.899204  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:47.891161    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.891855    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.893228    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.893728    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:47.895403    5266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:47.899226  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:47.899238  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:47.924220  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:47.924256  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:50.458301  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:50.468856  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:50.468926  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:50.493349  291455 cri.go:89] found id: ""
	I1212 01:37:50.493374  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.493382  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:50.493388  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:50.493445  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:50.517926  291455 cri.go:89] found id: ""
	I1212 01:37:50.517951  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.517960  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:50.517966  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:50.518026  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:50.546779  291455 cri.go:89] found id: ""
	I1212 01:37:50.546805  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.546814  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:50.546819  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:50.546877  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:50.572059  291455 cri.go:89] found id: ""
	I1212 01:37:50.572086  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.572102  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:50.572110  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:50.572173  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:50.596562  291455 cri.go:89] found id: ""
	I1212 01:37:50.596585  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.596594  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:50.596601  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:50.596669  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:50.621102  291455 cri.go:89] found id: ""
	I1212 01:37:50.621124  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.621132  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:50.621138  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:50.621196  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:50.645424  291455 cri.go:89] found id: ""
	I1212 01:37:50.645445  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.645454  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:50.645461  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:50.645521  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:50.670456  291455 cri.go:89] found id: ""
	I1212 01:37:50.670479  291455 logs.go:282] 0 containers: []
	W1212 01:37:50.670487  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:50.670497  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:50.670508  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:50.726487  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:50.726519  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:50.740149  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:50.740178  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:50.846147  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:50.836239    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.837070    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.839024    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.839387    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:50.840598    5382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:37:50.846174  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:50.846188  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:50.882509  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:50.882583  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
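Taken together, process 291455 repeats the same probe-and-gather cycle on a roughly three-second cadence (01:37:32, :35, :38, :41, :44, :47, :50) until its start timeout expires. A hedged sketch of that outer retry shape, with illustrative timeout and interval values rather than minikube's actual ones:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // apiserverRunning mirrors the crictl probe from the log.
    func apiserverRunning() bool {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
    	return err == nil && strings.TrimSpace(string(out)) != ""
    }

    // gatherLogs mirrors the fallback collectors from the log; output is discarded here.
    func gatherLogs() {
    	for _, args := range [][]string{
    		{"journalctl", "-u", "containerd", "-n", "400"},
    		{"journalctl", "-u", "kubelet", "-n", "400"},
    	} {
    		_ = exec.Command("sudo", args...).Run()
    	}
    }

    func waitForAPIServer(timeout, interval time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			return nil
    		}
    		gatherLogs()
    		time.Sleep(interval)
    	}
    	return errors.New("kube-apiserver container never appeared")
    }

    func main() {
    	// Illustrative values; the failing test waited several minutes.
    	if err := waitForAPIServer(time.Minute, 3*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }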
	W1212 01:37:52.036566  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:54.036621  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:53.411213  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:53.421355  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:53.421422  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:53.444104  291455 cri.go:89] found id: ""
	I1212 01:37:53.444130  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.444139  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:53.444146  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:53.444205  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:53.467938  291455 cri.go:89] found id: ""
	I1212 01:37:53.467963  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.467972  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:53.467979  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:53.468038  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:53.492082  291455 cri.go:89] found id: ""
	I1212 01:37:53.492106  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.492115  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:53.492122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:53.492180  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:53.516011  291455 cri.go:89] found id: ""
	I1212 01:37:53.516040  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.516049  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:53.516056  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:53.516115  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:53.543513  291455 cri.go:89] found id: ""
	I1212 01:37:53.543550  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.543559  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:53.543565  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:53.543707  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:53.568681  291455 cri.go:89] found id: ""
	I1212 01:37:53.568705  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.568713  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:53.568720  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:53.568797  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:53.593562  291455 cri.go:89] found id: ""
	I1212 01:37:53.593587  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.593596  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:53.593602  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:53.593676  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:53.617634  291455 cri.go:89] found id: ""
	I1212 01:37:53.617658  291455 logs.go:282] 0 containers: []
	W1212 01:37:53.617667  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:53.617677  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:53.617691  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:53.672956  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:53.672991  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:53.686739  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:53.686767  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:53.753435  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:53.745274    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.746109    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.747777    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.748302    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.749767    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:53.745274    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.746109    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.747777    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.748302    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:53.749767    5493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:53.753456  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:53.753470  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:37:53.785303  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:53.785347  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
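
Each collection cycle above probes for every control-plane component by container name, and the repeated found id: "" results confirm that containerd never started any of them. A minimal standalone sketch of that probe loop follows (an illustration of the pattern, not minikube's actual cri.go; it assumes sudo and crictl are available on the host):

    // probecri.go - sketch of the per-component crictl probe seen above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            // Equivalent of: sudo crictl ps -a --quiet --name=<name>
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("crictl failed for %q: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                // Mirrors the log's: No container was found matching "<name>"
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }
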
	W1212 01:37:56.536427  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:37:59.036479  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:37:56.343327  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:56.353619  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:56.353686  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:56.377008  291455 cri.go:89] found id: ""
	I1212 01:37:56.377032  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.377040  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:56.377047  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:56.377103  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:56.403572  291455 cri.go:89] found id: ""
	I1212 01:37:56.403599  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.403607  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:56.403614  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:56.403677  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:56.427234  291455 cri.go:89] found id: ""
	I1212 01:37:56.427256  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.427266  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:56.427272  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:56.427329  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:56.450300  291455 cri.go:89] found id: ""
	I1212 01:37:56.450325  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.450334  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:56.450340  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:56.450399  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:56.478269  291455 cri.go:89] found id: ""
	I1212 01:37:56.478293  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.478302  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:56.478308  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:56.478402  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:56.502839  291455 cri.go:89] found id: ""
	I1212 01:37:56.502863  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.502872  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:56.502879  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:56.502939  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:56.528770  291455 cri.go:89] found id: ""
	I1212 01:37:56.528796  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.528804  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:56.528810  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:56.528886  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:56.552625  291455 cri.go:89] found id: ""
	I1212 01:37:56.552687  291455 logs.go:282] 0 containers: []
	W1212 01:37:56.552701  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:56.552710  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:56.552722  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:56.582901  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:56.582929  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:56.638758  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:56.638790  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:56.652337  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:56.652364  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:56.718815  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:56.710468    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.711245    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.712862    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.713372    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.714933    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:56.710468    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.711245    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.712862    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.713372    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:56.714933    5619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:56.718853  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:56.718866  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
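
Every describe-nodes attempt in these cycles fails identically: kubectl cannot reach the apiserver on localhost:8443, which matches the empty kube-apiserver probe results. A plain TCP dial reproduces the "connection refused" symptom (a sketch for illustration only; the address comes straight from the log):

    // dialapiserver.go - reproduce the refused connection kubectl reports.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "localhost:8443" // the endpoint kubectl is trying to reach
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            // With no kube-apiserver container running, this fails with
            // "connect: connection refused", as in the stderr above.
            fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
            return
        }
        conn.Close()
        fmt.Printf("something is listening on %s\n", addr)
    }
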
	I1212 01:37:59.245105  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:37:59.255232  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:37:59.255300  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:37:59.280996  291455 cri.go:89] found id: ""
	I1212 01:37:59.281018  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.281027  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:37:59.281033  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:37:59.281089  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:37:59.306870  291455 cri.go:89] found id: ""
	I1212 01:37:59.306893  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.306901  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:37:59.306908  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:37:59.306967  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:37:59.332982  291455 cri.go:89] found id: ""
	I1212 01:37:59.333008  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.333017  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:37:59.333022  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:37:59.333128  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:37:59.360799  291455 cri.go:89] found id: ""
	I1212 01:37:59.360824  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.360833  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:37:59.360839  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:37:59.360897  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:37:59.383773  291455 cri.go:89] found id: ""
	I1212 01:37:59.383836  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.383851  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:37:59.383858  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:37:59.383916  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:37:59.411933  291455 cri.go:89] found id: ""
	I1212 01:37:59.411958  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.411966  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:37:59.411973  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:37:59.412073  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:37:59.437061  291455 cri.go:89] found id: ""
	I1212 01:37:59.437087  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.437095  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:37:59.437102  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:37:59.437182  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:37:59.461853  291455 cri.go:89] found id: ""
	I1212 01:37:59.461877  291455 logs.go:282] 0 containers: []
	W1212 01:37:59.461886  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:37:59.461895  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:37:59.461907  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:37:59.493084  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:37:59.493111  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:37:59.549198  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:37:59.549229  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:37:59.562644  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:37:59.562674  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:37:59.627349  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:37:59.619195    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.619835    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.621508    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.622053    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.623671    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:37:59.619195    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.619835    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.621508    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.622053    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:37:59.623671    5732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:37:59.627373  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:37:59.627388  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1212 01:38:01.535866  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:03.536428  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
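
Interleaved with those cycles, the second test process (pid 287206) keeps polling the no-preload-361053 node's Ready condition and retrying on connection refused. A hedged sketch of that retry loop: the URL and the roughly 2-second cadence come from the log, while the unauthenticated client that skips TLS verification is an assumption for illustration (the real test polls through an authenticated Kubernetes client):

    // pollnodeready.go - sketch of the node_ready.go retry loop above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint taken verbatim from the node_ready.go warnings.
        url := "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053"
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption for illustration: skip certificate verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for attempt := 1; attempt <= 30; attempt++ {
            resp, err := client.Get(url)
            if err != nil {
                // Mirrors: error getting node ... (will retry): ... connection refused
                fmt.Printf("attempt %d: will retry: %v\n", attempt, err)
                time.Sleep(2 * time.Second) // the log shows a ~2s retry cadence
                continue
            }
            resp.Body.Close()
            fmt.Printf("apiserver answered: HTTP %d\n", resp.StatusCode)
            return
        }
        fmt.Println("gave up waiting for the apiserver")
    }
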
	I1212 01:38:02.153040  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:02.163386  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:02.163465  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:02.188022  291455 cri.go:89] found id: ""
	I1212 01:38:02.188050  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.188058  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:02.188064  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:02.188126  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:02.212051  291455 cri.go:89] found id: ""
	I1212 01:38:02.212088  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.212097  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:02.212104  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:02.212163  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:02.236784  291455 cri.go:89] found id: ""
	I1212 01:38:02.236815  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.236824  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:02.236831  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:02.236895  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:02.262277  291455 cri.go:89] found id: ""
	I1212 01:38:02.262301  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.262310  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:02.262316  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:02.262375  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:02.286641  291455 cri.go:89] found id: ""
	I1212 01:38:02.286665  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.286674  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:02.286680  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:02.286739  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:02.315696  291455 cri.go:89] found id: ""
	I1212 01:38:02.315721  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.315729  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:02.315736  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:02.315796  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:02.341469  291455 cri.go:89] found id: ""
	I1212 01:38:02.341495  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.341504  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:02.341511  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:02.341578  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:02.375601  291455 cri.go:89] found id: ""
	I1212 01:38:02.375626  291455 logs.go:282] 0 containers: []
	W1212 01:38:02.375634  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:02.375644  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:02.375656  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:02.388949  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:02.388978  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:02.458902  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:02.448758    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.449311    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.452630    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.453261    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.454829    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:02.448758    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.449311    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.452630    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.453261    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:02.454829    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:02.458924  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:02.458936  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:02.485359  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:02.485393  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:02.512676  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:02.512746  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
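
Each cycle opens with pgrep -xnf kube-apiserver.*minikube.*, asking whether an apiserver process exists at all before any container listing. A rough sketch of what that match does, scanning /proc directly (Linux-only, illustration only; pgrep's newest-PID selection with -n is only approximated here by scan order):

    // findapiserverproc.go - approximate pgrep -xnf against /proc cmdlines.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "regexp"
        "strings"
    )

    func main() {
        // -f matches the full command line, -x requires the whole line to match.
        pat := regexp.MustCompile(`^kube-apiserver.*minikube.*$`)
        procDirs, _ := filepath.Glob("/proc/[0-9]*")
        found := ""
        for _, dir := range procDirs {
            raw, err := os.ReadFile(filepath.Join(dir, "cmdline"))
            if err != nil {
                continue // process may have exited; skip it
            }
            // cmdline is NUL-separated; rebuild it as a single line.
            cmdline := strings.TrimSpace(strings.ReplaceAll(string(raw), "\x00", " "))
            if pat.MatchString(cmdline) {
                found = filepath.Base(dir) // keep the last match seen
            }
        }
        if found == "" {
            fmt.Println("no kube-apiserver process found")
            return
        }
        fmt.Println("kube-apiserver pid:", found)
    }
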
	I1212 01:38:05.069728  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:05.084872  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:05.084975  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:05.130414  291455 cri.go:89] found id: ""
	I1212 01:38:05.130441  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.130450  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:05.130457  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:05.130524  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:05.156129  291455 cri.go:89] found id: ""
	I1212 01:38:05.156154  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.156163  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:05.156169  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:05.156230  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:05.182033  291455 cri.go:89] found id: ""
	I1212 01:38:05.182056  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.182065  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:05.182071  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:05.182131  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:05.206795  291455 cri.go:89] found id: ""
	I1212 01:38:05.206821  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.206830  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:05.206842  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:05.206903  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:05.231972  291455 cri.go:89] found id: ""
	I1212 01:38:05.231998  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.232008  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:05.232014  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:05.232075  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:05.257476  291455 cri.go:89] found id: ""
	I1212 01:38:05.257501  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.257509  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:05.257515  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:05.257576  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:05.282557  291455 cri.go:89] found id: ""
	I1212 01:38:05.282581  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.282590  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:05.282595  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:05.282655  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:05.306866  291455 cri.go:89] found id: ""
	I1212 01:38:05.306891  291455 logs.go:282] 0 containers: []
	W1212 01:38:05.306899  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:05.306908  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:05.306919  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:05.363028  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:05.363073  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:05.376693  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:05.376722  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:05.445040  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:05.435873    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.436618    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.438470    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.439137    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.440737    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:05.435873    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.436618    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.438470    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.439137    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:05.440737    5952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:05.445059  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:05.445071  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:05.470893  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:05.470933  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:05.536804  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:08.035822  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:10.036632  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:08.000563  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:08.015628  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:08.015701  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:08.081620  291455 cri.go:89] found id: ""
	I1212 01:38:08.081643  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.081652  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:08.081661  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:08.081736  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:08.129116  291455 cri.go:89] found id: ""
	I1212 01:38:08.129137  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.129146  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:08.129152  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:08.129208  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:08.154760  291455 cri.go:89] found id: ""
	I1212 01:38:08.154781  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.154790  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:08.154797  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:08.154853  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:08.181948  291455 cri.go:89] found id: ""
	I1212 01:38:08.181971  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.181981  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:08.181988  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:08.182052  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:08.206310  291455 cri.go:89] found id: ""
	I1212 01:38:08.206335  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.206345  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:08.206351  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:08.206413  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:08.230579  291455 cri.go:89] found id: ""
	I1212 01:38:08.230606  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.230615  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:08.230624  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:08.230690  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:08.259888  291455 cri.go:89] found id: ""
	I1212 01:38:08.259913  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.259922  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:08.259928  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:08.260006  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:08.284903  291455 cri.go:89] found id: ""
	I1212 01:38:08.284927  291455 logs.go:282] 0 containers: []
	W1212 01:38:08.284936  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:08.284945  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:08.284957  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:08.341529  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:08.341565  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:08.355353  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:08.355394  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:08.418766  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:08.409488    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.410375    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.412414    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.413281    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.414948    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:08.409488    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.410375    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.412414    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.413281    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:08.414948    6067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:08.418789  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:08.418801  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:08.444616  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:08.444654  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
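
The container status step uses a shell fallback chain: resolve crictl's full path with which (or fall back to the bare name), and if the crictl listing fails outright, list containers with docker instead. A sketch of running the same compound command from Go, using $() in place of the log's backquotes (assumes bash and sudo on the host):

    // containerstatus.go - run the crictl-or-docker fallback seen above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same compound command the log runs over SSH.
        cmd := `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("both crictl and docker listings failed: %v\n", err)
        }
        fmt.Print(string(out))
    }
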
	I1212 01:38:10.972656  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:10.983126  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:10.983206  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:11.011272  291455 cri.go:89] found id: ""
	I1212 01:38:11.011296  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.011305  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:11.011311  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:11.011372  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:11.061173  291455 cri.go:89] found id: ""
	I1212 01:38:11.061199  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.061208  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:11.061214  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:11.061273  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:11.124035  291455 cri.go:89] found id: ""
	I1212 01:38:11.124061  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.124070  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:11.124077  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:11.124144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:11.152861  291455 cri.go:89] found id: ""
	I1212 01:38:11.152900  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.152910  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:11.152932  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:11.153005  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:11.178248  291455 cri.go:89] found id: ""
	I1212 01:38:11.178270  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.178279  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:11.178285  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:11.178355  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:11.213235  291455 cri.go:89] found id: ""
	I1212 01:38:11.213260  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.213269  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:11.213275  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:11.213337  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:11.238933  291455 cri.go:89] found id: ""
	I1212 01:38:11.238960  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.238969  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:11.238975  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:11.239060  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:11.264115  291455 cri.go:89] found id: ""
	I1212 01:38:11.264137  291455 logs.go:282] 0 containers: []
	W1212 01:38:11.264146  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:11.264155  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:11.264167  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:11.320523  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:11.320561  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:11.334027  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:11.334059  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:12.036672  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:14.536663  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:11.411780  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:11.403056    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.403575    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.405319    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.405839    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.407505    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:11.403056    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.403575    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.405319    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.405839    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:11.407505    6181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:11.411803  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:11.411815  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:11.437459  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:11.437498  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:13.966371  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:13.976737  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:13.976807  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:14.002889  291455 cri.go:89] found id: ""
	I1212 01:38:14.002926  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.002936  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:14.002943  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:14.003051  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:14.028607  291455 cri.go:89] found id: ""
	I1212 01:38:14.028632  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.028640  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:14.028647  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:14.028707  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:14.068137  291455 cri.go:89] found id: ""
	I1212 01:38:14.068159  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.068168  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:14.068174  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:14.068236  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:14.114047  291455 cri.go:89] found id: ""
	I1212 01:38:14.114068  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.114077  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:14.114083  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:14.114142  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:14.143724  291455 cri.go:89] found id: ""
	I1212 01:38:14.143751  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.143760  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:14.143766  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:14.143837  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:14.172821  291455 cri.go:89] found id: ""
	I1212 01:38:14.172844  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.172853  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:14.172860  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:14.172922  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:14.201404  291455 cri.go:89] found id: ""
	I1212 01:38:14.201428  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.201437  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:14.201443  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:14.201502  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:14.225421  291455 cri.go:89] found id: ""
	I1212 01:38:14.225445  291455 logs.go:282] 0 containers: []
	W1212 01:38:14.225454  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:14.225464  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:14.225475  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:14.281620  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:14.281655  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:14.295270  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:14.295297  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:14.361558  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:14.353174    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.353959    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.355541    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.356054    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.357617    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:14.353174    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.353959    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.355541    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.356054    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:14.357617    6294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:14.361580  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:14.361594  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:14.387622  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:14.387657  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:17.036493  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:19.535924  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:16.917930  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:16.928677  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:16.928747  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:16.956782  291455 cri.go:89] found id: ""
	I1212 01:38:16.956805  291455 logs.go:282] 0 containers: []
	W1212 01:38:16.956815  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:16.956821  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:16.956882  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:16.982223  291455 cri.go:89] found id: ""
	I1212 01:38:16.982255  291455 logs.go:282] 0 containers: []
	W1212 01:38:16.982264  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:16.982270  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:16.982337  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:17.011072  291455 cri.go:89] found id: ""
	I1212 01:38:17.011097  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.011107  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:17.011114  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:17.011191  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:17.052070  291455 cri.go:89] found id: ""
	I1212 01:38:17.052096  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.052104  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:17.052110  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:17.052177  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:17.084107  291455 cri.go:89] found id: ""
	I1212 01:38:17.084141  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.084151  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:17.084157  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:17.084224  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:17.122692  291455 cri.go:89] found id: ""
	I1212 01:38:17.122766  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.122797  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:17.122817  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:17.122923  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:17.156006  291455 cri.go:89] found id: ""
	I1212 01:38:17.156081  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.156109  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:17.156129  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:17.156241  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:17.182169  291455 cri.go:89] found id: ""
	I1212 01:38:17.182240  291455 logs.go:282] 0 containers: []
	W1212 01:38:17.182264  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:17.182285  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:17.182335  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:17.237895  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:17.237933  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:17.252584  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:17.252654  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:17.321480  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:17.312815    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.313531    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.315204    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.315765    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.317270    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:17.312815    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.313531    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.315204    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.315765    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:17.317270    6409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:17.321502  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:17.321515  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:17.347596  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:17.347629  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:19.879967  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:19.890396  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:19.890464  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:19.918925  291455 cri.go:89] found id: ""
	I1212 01:38:19.918949  291455 logs.go:282] 0 containers: []
	W1212 01:38:19.918958  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:19.918964  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:19.919053  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:19.943584  291455 cri.go:89] found id: ""
	I1212 01:38:19.943610  291455 logs.go:282] 0 containers: []
	W1212 01:38:19.943619  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:19.943626  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:19.943681  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:19.969048  291455 cri.go:89] found id: ""
	I1212 01:38:19.969068  291455 logs.go:282] 0 containers: []
	W1212 01:38:19.969077  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:19.969083  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:19.969144  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:20.003773  291455 cri.go:89] found id: ""
	I1212 01:38:20.003795  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.003804  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:20.003821  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:20.003894  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:20.066569  291455 cri.go:89] found id: ""
	I1212 01:38:20.066593  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.066602  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:20.066608  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:20.066672  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:20.123787  291455 cri.go:89] found id: ""
	I1212 01:38:20.123818  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.123828  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:20.123835  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:20.123902  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:20.148942  291455 cri.go:89] found id: ""
	I1212 01:38:20.148967  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.148976  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:20.148982  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:20.149040  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:20.174974  291455 cri.go:89] found id: ""
	I1212 01:38:20.175019  291455 logs.go:282] 0 containers: []
	W1212 01:38:20.175028  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:20.175037  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:20.175049  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:20.188705  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:20.188734  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:20.257975  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:20.247998    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.248900    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.250615    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.251381    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.253188    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:20.247998    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.248900    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.250615    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.251381    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:20.253188    6521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:20.258004  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:20.258018  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:20.283558  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:20.283589  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:20.313552  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:20.313580  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 01:38:21.535995  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:23.536531  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:22.869782  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:22.880016  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:22.880091  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:22.903866  291455 cri.go:89] found id: ""
	I1212 01:38:22.903891  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.903901  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:22.903908  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:22.903971  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:22.927721  291455 cri.go:89] found id: ""
	I1212 01:38:22.927744  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.927752  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:22.927759  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:22.927816  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:22.952423  291455 cri.go:89] found id: ""
	I1212 01:38:22.952447  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.952455  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:22.952461  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:22.952517  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:22.976598  291455 cri.go:89] found id: ""
	I1212 01:38:22.976620  291455 logs.go:282] 0 containers: []
	W1212 01:38:22.976628  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:22.976634  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:22.976691  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:23.003885  291455 cri.go:89] found id: ""
	I1212 01:38:23.003919  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.003939  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:23.003947  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:23.004046  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:23.033013  291455 cri.go:89] found id: ""
	I1212 01:38:23.033036  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.033045  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:23.033052  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:23.033112  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:23.092706  291455 cri.go:89] found id: ""
	I1212 01:38:23.092730  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.092739  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:23.092745  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:23.092802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:23.133640  291455 cri.go:89] found id: ""
	I1212 01:38:23.133668  291455 logs.go:282] 0 containers: []
	W1212 01:38:23.133676  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:23.133686  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:23.133697  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:23.196413  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:23.196452  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:23.209608  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:23.209634  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:23.275524  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:23.267738    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.268351    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.269907    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.270261    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.271739    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:23.267738    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.268351    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.269907    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.270261    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:23.271739    6636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:23.275547  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:23.275559  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:23.300618  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:23.300651  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:25.829093  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:25.839308  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:25.839392  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:25.862901  291455 cri.go:89] found id: ""
	I1212 01:38:25.862927  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.862936  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:25.862942  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:25.863050  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:25.886878  291455 cri.go:89] found id: ""
	I1212 01:38:25.886912  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.886921  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:25.886927  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:25.887012  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:25.912760  291455 cri.go:89] found id: ""
	I1212 01:38:25.912782  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.912791  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:25.912799  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:25.912867  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:25.937385  291455 cri.go:89] found id: ""
	I1212 01:38:25.937409  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.937418  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:25.937424  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:25.937482  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:25.961635  291455 cri.go:89] found id: ""
	I1212 01:38:25.961659  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.961668  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:25.961674  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:25.961736  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:25.984780  291455 cri.go:89] found id: ""
	I1212 01:38:25.984804  291455 logs.go:282] 0 containers: []
	W1212 01:38:25.984814  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:25.984821  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:25.984886  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:26.013891  291455 cri.go:89] found id: ""
	I1212 01:38:26.013918  291455 logs.go:282] 0 containers: []
	W1212 01:38:26.013927  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:26.013933  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:26.013995  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:26.058178  291455 cri.go:89] found id: ""
	I1212 01:38:26.058203  291455 logs.go:282] 0 containers: []
	W1212 01:38:26.058212  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:26.058222  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:26.058233  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:26.145226  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:26.145265  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:26.159401  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:26.159430  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:26.224696  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:26.216061    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.217085    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.217937    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.219401    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.219913    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:26.216061    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.217085    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.217937    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.219401    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:26.219913    6745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:26.224716  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:26.224727  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:26.249818  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:26.249853  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:25.536763  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:28.036701  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:30.036797  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:28.780686  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:28.791844  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:28.791927  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:28.820089  291455 cri.go:89] found id: ""
	I1212 01:38:28.820114  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.820123  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:28.820129  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:28.820187  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:28.844073  291455 cri.go:89] found id: ""
	I1212 01:38:28.844097  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.844106  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:28.844115  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:28.844173  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:28.874510  291455 cri.go:89] found id: ""
	I1212 01:38:28.874535  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.874544  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:28.874550  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:28.874609  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:28.899593  291455 cri.go:89] found id: ""
	I1212 01:38:28.899667  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.899683  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:28.899691  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:28.899749  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:28.923958  291455 cri.go:89] found id: ""
	I1212 01:38:28.923981  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.923990  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:28.923996  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:28.924058  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:28.949188  291455 cri.go:89] found id: ""
	I1212 01:38:28.949217  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.949225  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:28.949231  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:28.949307  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:28.974943  291455 cri.go:89] found id: ""
	I1212 01:38:28.974968  291455 logs.go:282] 0 containers: []
	W1212 01:38:28.974976  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:28.974982  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:28.975062  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:29.004380  291455 cri.go:89] found id: ""
	I1212 01:38:29.004475  291455 logs.go:282] 0 containers: []
	W1212 01:38:29.004501  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:29.004542  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:29.004572  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:29.021785  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:29.021856  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:29.143333  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:29.134378    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.134910    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.137306    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.137843    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.139511    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:29.134378    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.134910    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.137306    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.137843    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:29.139511    6848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:29.143354  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:29.143366  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:29.168668  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:29.168699  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:29.197133  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:29.197159  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 01:38:32.536552  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:35.039253  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:31.753888  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:31.765059  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:31.765150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:31.790319  291455 cri.go:89] found id: ""
	I1212 01:38:31.790342  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.790350  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:31.790357  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:31.790415  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:31.815400  291455 cri.go:89] found id: ""
	I1212 01:38:31.815424  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.815434  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:31.815441  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:31.815502  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:31.840194  291455 cri.go:89] found id: ""
	I1212 01:38:31.840217  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.840226  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:31.840231  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:31.840291  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:31.867911  291455 cri.go:89] found id: ""
	I1212 01:38:31.867935  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.867943  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:31.867949  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:31.868008  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:31.892198  291455 cri.go:89] found id: ""
	I1212 01:38:31.892222  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.892230  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:31.892238  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:31.892296  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:31.916890  291455 cri.go:89] found id: ""
	I1212 01:38:31.916914  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.916923  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:31.916929  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:31.916988  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:31.942060  291455 cri.go:89] found id: ""
	I1212 01:38:31.942085  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.942095  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:31.942102  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:31.942160  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:31.968817  291455 cri.go:89] found id: ""
	I1212 01:38:31.968839  291455 logs.go:282] 0 containers: []
	W1212 01:38:31.968848  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:31.968857  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:31.968871  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:31.997201  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:31.997227  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:32.062907  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:32.062945  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:32.079848  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:32.079874  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:32.172399  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:32.162924    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.163521    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.165105    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.165573    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.167197    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:32.162924    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.163521    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.165105    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.165573    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:32.167197    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:32.172421  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:32.172433  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:34.699204  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:34.710589  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:34.710660  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:34.734740  291455 cri.go:89] found id: ""
	I1212 01:38:34.734767  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.734776  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:34.734782  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:34.734841  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:34.759636  291455 cri.go:89] found id: ""
	I1212 01:38:34.759659  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.759667  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:34.759679  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:34.759739  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:34.785220  291455 cri.go:89] found id: ""
	I1212 01:38:34.785255  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.785265  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:34.785271  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:34.785341  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:34.814480  291455 cri.go:89] found id: ""
	I1212 01:38:34.814502  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.814510  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:34.814516  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:34.814580  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:34.840740  291455 cri.go:89] found id: ""
	I1212 01:38:34.840774  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.840784  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:34.840790  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:34.840872  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:34.868875  291455 cri.go:89] found id: ""
	I1212 01:38:34.868898  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.868907  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:34.868913  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:34.868973  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:34.897841  291455 cri.go:89] found id: ""
	I1212 01:38:34.897864  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.897873  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:34.897879  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:34.897937  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:34.921846  291455 cri.go:89] found id: ""
	I1212 01:38:34.921869  291455 logs.go:282] 0 containers: []
	W1212 01:38:34.921877  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:34.921886  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:34.921897  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:34.935038  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:34.935066  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:35.007684  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:34.997327    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:34.997746    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:34.999039    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:34.999714    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:35.001615    7080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:35.007755  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:35.007775  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:35.034750  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:35.034794  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:35.089747  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:35.089777  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1212 01:38:37.536673  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:39.543660  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:37.657148  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:37.668842  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:37.668917  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:37.696665  291455 cri.go:89] found id: ""
	I1212 01:38:37.696699  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.696708  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:37.696720  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:37.696777  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:37.728956  291455 cri.go:89] found id: ""
	I1212 01:38:37.728979  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.728987  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:37.728993  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:37.729058  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:37.753296  291455 cri.go:89] found id: ""
	I1212 01:38:37.753324  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.753334  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:37.753340  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:37.753397  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:37.778445  291455 cri.go:89] found id: ""
	I1212 01:38:37.778471  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.778481  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:37.778490  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:37.778548  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:37.807550  291455 cri.go:89] found id: ""
	I1212 01:38:37.807572  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.807580  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:37.807587  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:37.807649  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:37.832292  291455 cri.go:89] found id: ""
	I1212 01:38:37.832315  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.832323  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:37.832329  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:37.832386  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:37.856566  291455 cri.go:89] found id: ""
	I1212 01:38:37.856588  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.856597  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:37.856602  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:37.856660  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:37.880677  291455 cri.go:89] found id: ""
	I1212 01:38:37.880741  291455 logs.go:282] 0 containers: []
	W1212 01:38:37.880766  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:37.880789  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:37.880820  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:37.910870  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:37.910908  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:37.938485  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:37.938520  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:37.993961  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:37.993995  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:38.010371  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:38.010404  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:38.096529  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:38.085475    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.086344    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.088104    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.088451    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:38.092325    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:40.598418  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:40.609775  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:40.609847  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:40.635651  291455 cri.go:89] found id: ""
	I1212 01:38:40.635677  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.635686  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:40.635693  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:40.635757  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:40.660863  291455 cri.go:89] found id: ""
	I1212 01:38:40.660889  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.660898  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:40.660905  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:40.660966  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:40.685941  291455 cri.go:89] found id: ""
	I1212 01:38:40.686012  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.686053  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:40.686078  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:40.686166  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:40.711525  291455 cri.go:89] found id: ""
	I1212 01:38:40.711554  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.711563  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:40.711569  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:40.711630  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:40.737721  291455 cri.go:89] found id: ""
	I1212 01:38:40.737795  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.737816  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:40.737836  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:40.737927  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:40.761337  291455 cri.go:89] found id: ""
	I1212 01:38:40.761402  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.761424  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:40.761442  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:40.761525  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:40.786163  291455 cri.go:89] found id: ""
	I1212 01:38:40.786239  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.786264  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:40.786285  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:40.786412  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:40.810546  291455 cri.go:89] found id: ""
	I1212 01:38:40.810610  291455 logs.go:282] 0 containers: []
	W1212 01:38:40.810634  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:40.810655  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:40.810694  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:40.866283  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:40.866320  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:40.879799  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:40.879834  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:40.945902  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:40.937611    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.938411    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.939975    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.940544    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:40.942091    7307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:40.945925  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:40.945938  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:40.971267  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:40.971302  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:42.036561  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:44.536569  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:43.502022  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:43.513782  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:43.513855  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:43.538026  291455 cri.go:89] found id: ""
	I1212 01:38:43.538047  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.538055  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:43.538060  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:43.538117  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:43.562296  291455 cri.go:89] found id: ""
	I1212 01:38:43.562320  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.562329  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:43.562335  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:43.562399  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:43.585964  291455 cri.go:89] found id: ""
	I1212 01:38:43.585986  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.585995  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:43.586001  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:43.586056  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:43.609636  291455 cri.go:89] found id: ""
	I1212 01:38:43.609658  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.609666  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:43.609672  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:43.609729  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:43.634822  291455 cri.go:89] found id: ""
	I1212 01:38:43.634843  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.634852  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:43.634857  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:43.634916  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:43.659517  291455 cri.go:89] found id: ""
	I1212 01:38:43.659539  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.659553  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:43.659560  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:43.659619  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:43.684416  291455 cri.go:89] found id: ""
	I1212 01:38:43.684471  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.684486  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:43.684493  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:43.684557  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:43.708909  291455 cri.go:89] found id: ""
	I1212 01:38:43.708931  291455 logs.go:282] 0 containers: []
	W1212 01:38:43.708939  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:43.708949  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:43.708961  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:43.764034  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:43.764069  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:43.778276  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:43.778304  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:43.849112  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:43.839330    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.839703    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.842808    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.843485    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:43.845319    7421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:43.849132  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:43.849144  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:43.874790  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:43.874823  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:47.036537  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:49.536417  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:46.404666  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:46.415686  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:46.415772  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:46.446409  291455 cri.go:89] found id: ""
	I1212 01:38:46.446436  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.446445  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:46.446452  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:46.446517  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:46.481137  291455 cri.go:89] found id: ""
	I1212 01:38:46.481160  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.481169  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:46.481175  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:46.481258  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:46.506866  291455 cri.go:89] found id: ""
	I1212 01:38:46.506892  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.506902  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:46.506908  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:46.506964  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:46.535109  291455 cri.go:89] found id: ""
	I1212 01:38:46.535185  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.535208  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:46.535228  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:46.535312  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:46.559379  291455 cri.go:89] found id: ""
	I1212 01:38:46.559402  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.559410  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:46.559417  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:46.559478  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:46.583642  291455 cri.go:89] found id: ""
	I1212 01:38:46.583717  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.583738  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:46.583758  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:46.583842  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:46.608474  291455 cri.go:89] found id: ""
	I1212 01:38:46.608541  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.608563  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:46.608578  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:46.608652  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:46.632905  291455 cri.go:89] found id: ""
	I1212 01:38:46.632982  291455 logs.go:282] 0 containers: []
	W1212 01:38:46.632997  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:46.633007  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:46.633018  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:46.689011  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:46.689048  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:46.702565  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:46.702592  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:46.772610  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:46.763145    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.764149    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.764820    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.766385    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:46.766678    7533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:46.772629  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:46.772643  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:46.797690  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:46.797725  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:49.328051  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:49.341287  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:49.341360  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:49.378113  291455 cri.go:89] found id: ""
	I1212 01:38:49.378135  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.378143  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:49.378149  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:49.378210  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:49.404269  291455 cri.go:89] found id: ""
	I1212 01:38:49.404291  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.404300  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:49.404306  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:49.404364  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:49.428783  291455 cri.go:89] found id: ""
	I1212 01:38:49.428809  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.428819  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:49.428825  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:49.428884  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:49.453856  291455 cri.go:89] found id: ""
	I1212 01:38:49.453889  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.453898  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:49.453905  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:49.453965  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:49.480403  291455 cri.go:89] found id: ""
	I1212 01:38:49.480428  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.480439  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:49.480445  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:49.480502  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:49.505527  291455 cri.go:89] found id: ""
	I1212 01:38:49.505594  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.505617  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:49.505644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:49.505740  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:49.529450  291455 cri.go:89] found id: ""
	I1212 01:38:49.529474  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.529483  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:49.529489  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:49.529546  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:49.554349  291455 cri.go:89] found id: ""
	I1212 01:38:49.554412  291455 logs.go:282] 0 containers: []
	W1212 01:38:49.554435  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:49.554465  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:49.554493  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:49.611773  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:49.611805  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:49.625145  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:49.625169  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:49.689186  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:49.680639    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.681463    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.683157    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.683640    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:49.685287    7647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:49.689208  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:49.689220  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:49.715241  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:49.715275  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:51.536523  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:53.536619  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:52.245578  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:52.255964  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:52.256032  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:52.288234  291455 cri.go:89] found id: ""
	I1212 01:38:52.288273  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.288281  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:52.288287  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:52.288362  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:52.361726  291455 cri.go:89] found id: ""
	I1212 01:38:52.361756  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.361765  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:52.361772  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:52.361848  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:52.390222  291455 cri.go:89] found id: ""
	I1212 01:38:52.390248  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.390257  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:52.390262  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:52.390320  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:52.415677  291455 cri.go:89] found id: ""
	I1212 01:38:52.415712  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.415721  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:52.415728  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:52.415796  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:52.440412  291455 cri.go:89] found id: ""
	I1212 01:38:52.440435  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.440444  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:52.440450  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:52.440508  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:52.464172  291455 cri.go:89] found id: ""
	I1212 01:38:52.464203  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.464212  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:52.464219  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:52.464278  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:52.496050  291455 cri.go:89] found id: ""
	I1212 01:38:52.496075  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.496083  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:52.496089  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:52.496147  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:52.525249  291455 cri.go:89] found id: ""
	I1212 01:38:52.525271  291455 logs.go:282] 0 containers: []
	W1212 01:38:52.525279  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:52.525288  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:52.525299  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:52.580198  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:52.580233  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:52.593582  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:52.593648  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:52.659167  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:52.650803    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.651520    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.653182    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.653702    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:52.655438    7760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:52.659187  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:52.659199  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:52.685268  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:52.685300  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:55.219025  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:55.229148  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:55.229222  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:55.252977  291455 cri.go:89] found id: ""
	I1212 01:38:55.253051  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.253066  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:55.253077  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:55.253140  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:55.276881  291455 cri.go:89] found id: ""
	I1212 01:38:55.276945  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.276959  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:55.276966  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:55.277024  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:55.316321  291455 cri.go:89] found id: ""
	I1212 01:38:55.316355  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.316364  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:55.316370  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:55.316447  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:55.355675  291455 cri.go:89] found id: ""
	I1212 01:38:55.355703  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.355711  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:55.355717  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:55.355791  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:55.394580  291455 cri.go:89] found id: ""
	I1212 01:38:55.394607  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.394615  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:55.394621  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:55.394693  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:55.423340  291455 cri.go:89] found id: ""
	I1212 01:38:55.423363  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.423371  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:55.423378  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:55.423436  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:55.447512  291455 cri.go:89] found id: ""
	I1212 01:38:55.447536  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.447544  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:55.447550  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:55.447610  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:55.470830  291455 cri.go:89] found id: ""
	I1212 01:38:55.470853  291455 logs.go:282] 0 containers: []
	W1212 01:38:55.470867  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:55.470876  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:55.470886  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:55.528525  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:55.528561  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:38:55.541815  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:55.541843  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:55.605253  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:55.596889    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.597592    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.599233    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.599799    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:55.601358    7871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:38:55.605280  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:55.605292  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:55.631237  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:55.631267  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:38:55.536688  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:38:58.036700  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:38:58.158753  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:38:58.169462  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:38:58.169546  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:38:58.194075  291455 cri.go:89] found id: ""
	I1212 01:38:58.194096  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.194105  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:38:58.194111  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:38:58.194171  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:38:58.218468  291455 cri.go:89] found id: ""
	I1212 01:38:58.218546  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.218569  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:38:58.218590  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:38:58.218675  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:38:58.242950  291455 cri.go:89] found id: ""
	I1212 01:38:58.242973  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.242981  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:38:58.242987  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:38:58.243142  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:38:58.269403  291455 cri.go:89] found id: ""
	I1212 01:38:58.269423  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.269432  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:38:58.269439  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:38:58.269502  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:38:58.317022  291455 cri.go:89] found id: ""
	I1212 01:38:58.317044  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.317054  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:38:58.317059  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:38:58.317117  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:38:58.373414  291455 cri.go:89] found id: ""
	I1212 01:38:58.373486  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.373511  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:38:58.373531  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:38:58.373619  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:38:58.404516  291455 cri.go:89] found id: ""
	I1212 01:38:58.404583  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.404597  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:38:58.404604  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:38:58.404663  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:38:58.433096  291455 cri.go:89] found id: ""
	I1212 01:38:58.433120  291455 logs.go:282] 0 containers: []
	W1212 01:38:58.433131  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:38:58.433141  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:38:58.433170  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:38:58.495200  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:38:58.486845    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.487734    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.489310    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.489623    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.491296    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:38:58.486845    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.487734    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.489310    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.489623    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:38:58.491296    7976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:38:58.495223  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:38:58.495237  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:38:58.520595  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:38:58.520626  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:38:58.547636  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:38:58.547664  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:38:58.603945  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:38:58.603979  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:01.119071  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:01.130124  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:01.130196  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:01.155700  291455 cri.go:89] found id: ""
	I1212 01:39:01.155725  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.155733  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:01.155740  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:01.155799  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:01.183985  291455 cri.go:89] found id: ""
	I1212 01:39:01.184012  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.184021  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:01.184028  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:01.184095  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:01.211713  291455 cri.go:89] found id: ""
	I1212 01:39:01.211740  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.211749  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:01.211756  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:01.211817  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:01.238159  291455 cri.go:89] found id: ""
	I1212 01:39:01.238185  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.238195  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:01.238201  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:01.238265  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:01.264520  291455 cri.go:89] found id: ""
	I1212 01:39:01.264544  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.264553  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:01.264560  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:01.264618  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:01.320162  291455 cri.go:89] found id: ""
	I1212 01:39:01.320191  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.320200  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:01.320207  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:01.320276  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	W1212 01:39:00.536335  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:02.536671  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:05.036449  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:39:01.367993  291455 cri.go:89] found id: ""
	I1212 01:39:01.368020  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.368029  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:01.368037  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:01.368107  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:01.395205  291455 cri.go:89] found id: ""
	I1212 01:39:01.395230  291455 logs.go:282] 0 containers: []
	W1212 01:39:01.395239  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:01.395248  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:01.395260  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:01.450970  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:01.451049  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:01.464511  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:01.464540  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:01.529452  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:01.521771    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.522386    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.523907    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.524217    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.525703    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:01.521771    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.522386    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.523907    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.524217    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:01.525703    8093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:01.529472  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:01.529484  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:01.553702  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:01.553734  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:04.082286  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:04.093237  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:04.093313  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:04.118261  291455 cri.go:89] found id: ""
	I1212 01:39:04.118283  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.118292  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:04.118298  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:04.118360  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:04.147714  291455 cri.go:89] found id: ""
	I1212 01:39:04.147736  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.147745  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:04.147751  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:04.147815  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:04.172999  291455 cri.go:89] found id: ""
	I1212 01:39:04.173023  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.173032  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:04.173039  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:04.173101  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:04.197081  291455 cri.go:89] found id: ""
	I1212 01:39:04.197103  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.197111  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:04.197119  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:04.197176  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:04.220639  291455 cri.go:89] found id: ""
	I1212 01:39:04.220665  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.220674  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:04.220681  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:04.220746  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:04.248901  291455 cri.go:89] found id: ""
	I1212 01:39:04.248926  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.248935  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:04.248944  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:04.249011  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:04.274064  291455 cri.go:89] found id: ""
	I1212 01:39:04.274085  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.274093  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:04.274099  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:04.274161  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:04.332510  291455 cri.go:89] found id: ""
	I1212 01:39:04.332535  291455 logs.go:282] 0 containers: []
	W1212 01:39:04.332545  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:04.332555  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:04.332572  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:04.368151  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:04.368189  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:04.403091  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:04.403118  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:04.459000  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:04.459031  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:04.472281  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:04.472306  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:04.534979  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:04.526363    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.527054    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.528724    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.529233    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.530692    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:04.526363    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.527054    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.528724    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.529233    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:04.530692    8218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1212 01:39:07.036549  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:09.036731  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:39:07.035447  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:07.046244  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:07.046313  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:07.072737  291455 cri.go:89] found id: ""
	I1212 01:39:07.072761  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.072770  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:07.072776  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:07.072835  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:07.097400  291455 cri.go:89] found id: ""
	I1212 01:39:07.097423  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.097431  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:07.097438  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:07.097496  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:07.121464  291455 cri.go:89] found id: ""
	I1212 01:39:07.121486  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.121495  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:07.121501  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:07.121584  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:07.145780  291455 cri.go:89] found id: ""
	I1212 01:39:07.145800  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.145808  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:07.145814  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:07.145870  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:07.169997  291455 cri.go:89] found id: ""
	I1212 01:39:07.170018  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.170027  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:07.170033  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:07.170091  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:07.195061  291455 cri.go:89] found id: ""
	I1212 01:39:07.195088  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.195096  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:07.195103  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:07.195161  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:07.220294  291455 cri.go:89] found id: ""
	I1212 01:39:07.220317  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.220325  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:07.220331  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:07.220389  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:07.245551  291455 cri.go:89] found id: ""
	I1212 01:39:07.245576  291455 logs.go:282] 0 containers: []
	W1212 01:39:07.245586  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:07.245595  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:07.245607  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:07.277493  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:07.277521  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:07.344946  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:07.347238  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:07.376690  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:07.376714  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:07.447695  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:07.438862    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.439591    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.441334    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.441943    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.443673    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:07.438862    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.439591    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.441334    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.441943    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:07.443673    8330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:07.447717  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:07.447730  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:09.974214  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:09.987839  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:09.987921  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:10.025371  291455 cri.go:89] found id: ""
	I1212 01:39:10.025397  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.025407  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:10.025413  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:10.025477  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:10.051333  291455 cri.go:89] found id: ""
	I1212 01:39:10.051357  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.051366  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:10.051371  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:10.051436  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:10.075263  291455 cri.go:89] found id: ""
	I1212 01:39:10.075289  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.075298  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:10.075305  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:10.075364  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:10.103331  291455 cri.go:89] found id: ""
	I1212 01:39:10.103355  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.103364  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:10.103370  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:10.103431  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:10.128706  291455 cri.go:89] found id: ""
	I1212 01:39:10.128730  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.128739  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:10.128746  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:10.128802  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:10.154605  291455 cri.go:89] found id: ""
	I1212 01:39:10.154627  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.154637  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:10.154644  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:10.154703  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:10.179767  291455 cri.go:89] found id: ""
	I1212 01:39:10.179791  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.179800  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:10.179806  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:10.179864  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:10.208346  291455 cri.go:89] found id: ""
	I1212 01:39:10.208369  291455 logs.go:282] 0 containers: []
	W1212 01:39:10.208376  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:10.208386  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:10.208397  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:10.263848  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:10.263883  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:10.279969  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:10.279994  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:10.405176  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:10.396616    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.397197    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.398853    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.399595    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.401217    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:10.396616    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.397197    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.398853    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.399595    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:10.401217    8429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:10.405198  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:10.405210  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:10.431360  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:10.431398  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 01:39:11.536529  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:13.536580  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:39:12.959344  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:12.971541  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:12.971628  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:13.006786  291455 cri.go:89] found id: ""
	I1212 01:39:13.006815  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.006824  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:13.006830  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:13.006903  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:13.032106  291455 cri.go:89] found id: ""
	I1212 01:39:13.032127  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.032135  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:13.032141  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:13.032200  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:13.057432  291455 cri.go:89] found id: ""
	I1212 01:39:13.057454  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.057463  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:13.057469  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:13.057529  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:13.082502  291455 cri.go:89] found id: ""
	I1212 01:39:13.082524  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.082532  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:13.082538  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:13.082595  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:13.108199  291455 cri.go:89] found id: ""
	I1212 01:39:13.108272  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.108295  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:13.108323  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:13.108433  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:13.134284  291455 cri.go:89] found id: ""
	I1212 01:39:13.134356  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.134379  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:13.134398  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:13.134485  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:13.159517  291455 cri.go:89] found id: ""
	I1212 01:39:13.159541  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.159550  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:13.159556  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:13.159614  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:13.183175  291455 cri.go:89] found id: ""
	I1212 01:39:13.183199  291455 logs.go:282] 0 containers: []
	W1212 01:39:13.183207  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:13.183216  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:13.183232  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:13.241174  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:13.241210  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:13.254849  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:13.254880  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:13.381552  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:13.373347    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.373888    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.375400    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.375820    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.377000    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:13.373347    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.373888    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.375400    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.375820    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:13.377000    8544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:13.381573  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:13.381586  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:13.406354  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:13.406385  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:15.933099  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:15.943596  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:15.943674  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:15.966960  291455 cri.go:89] found id: ""
	I1212 01:39:15.967014  291455 logs.go:282] 0 containers: []
	W1212 01:39:15.967023  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:15.967030  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:15.967090  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:15.996145  291455 cri.go:89] found id: ""
	I1212 01:39:15.996167  291455 logs.go:282] 0 containers: []
	W1212 01:39:15.996175  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:15.996182  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:15.996239  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:16.025152  291455 cri.go:89] found id: ""
	I1212 01:39:16.025175  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.025183  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:16.025191  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:16.025248  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:16.050231  291455 cri.go:89] found id: ""
	I1212 01:39:16.050264  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.050273  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:16.050279  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:16.050345  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:16.076929  291455 cri.go:89] found id: ""
	I1212 01:39:16.076958  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.076967  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:16.076975  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:16.077054  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:16.102241  291455 cri.go:89] found id: ""
	I1212 01:39:16.102273  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.102282  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:16.102304  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:16.102383  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:16.126239  291455 cri.go:89] found id: ""
	I1212 01:39:16.126302  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.126324  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:16.126344  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:16.126417  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:16.151645  291455 cri.go:89] found id: ""
	I1212 01:39:16.151674  291455 logs.go:282] 0 containers: []
	W1212 01:39:16.151683  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:16.151692  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:16.151702  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:16.176852  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:16.176882  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:16.206720  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:16.206746  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:16.262653  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:16.262686  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:16.275603  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:16.275634  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:16.035987  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	W1212 01:39:18.036847  287206 node_ready.go:55] error getting node "no-preload-361053" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-361053": dial tcp 192.168.85.2:8443: connect: connection refused
	I1212 01:39:18.536213  287206 node_ready.go:38] duration metric: took 6m0.000908955s for node "no-preload-361053" to be "Ready" ...
	I1212 01:39:18.539274  287206 out.go:203] 
	W1212 01:39:18.542145  287206 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 01:39:18.542166  287206 out.go:285] * 
	W1212 01:39:18.544311  287206 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:39:18.547291  287206 out.go:203] 
	W1212 01:39:16.359325  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:16.351492    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.351974    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.353266    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.353666    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.355275    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:16.351492    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.351974    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.353266    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.353666    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:16.355275    8667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:18.859963  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:18.870960  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:18.871050  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:18.910477  291455 cri.go:89] found id: ""
	I1212 01:39:18.910504  291455 logs.go:282] 0 containers: []
	W1212 01:39:18.910513  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:18.910519  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:18.910580  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:18.935189  291455 cri.go:89] found id: ""
	I1212 01:39:18.935212  291455 logs.go:282] 0 containers: []
	W1212 01:39:18.935221  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:18.935226  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:18.935282  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:18.960848  291455 cri.go:89] found id: ""
	I1212 01:39:18.960874  291455 logs.go:282] 0 containers: []
	W1212 01:39:18.960883  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:18.960888  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:18.960945  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:18.999545  291455 cri.go:89] found id: ""
	I1212 01:39:18.999572  291455 logs.go:282] 0 containers: []
	W1212 01:39:18.999581  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:18.999594  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:18.999657  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:19.037306  291455 cri.go:89] found id: ""
	I1212 01:39:19.037333  291455 logs.go:282] 0 containers: []
	W1212 01:39:19.037341  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:19.037347  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:19.037405  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:19.076075  291455 cri.go:89] found id: ""
	I1212 01:39:19.076096  291455 logs.go:282] 0 containers: []
	W1212 01:39:19.076104  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:19.076114  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:19.076168  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:19.106494  291455 cri.go:89] found id: ""
	I1212 01:39:19.106515  291455 logs.go:282] 0 containers: []
	W1212 01:39:19.106524  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:19.106529  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:19.106586  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:19.133049  291455 cri.go:89] found id: ""
	I1212 01:39:19.133073  291455 logs.go:282] 0 containers: []
	W1212 01:39:19.133082  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:19.133090  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:19.133105  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:19.218096  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:19.208102    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.208898    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.210671    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.211009    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:19.214074    8760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:19.218119  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:19.218140  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:19.246120  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:19.246155  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:19.279088  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:19.279116  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:19.436253  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:19.436340  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
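The block above is one full iteration of minikube's control-plane probe: pgrep for a kube-apiserver process, then one crictl query per expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), each returning an empty ID list, followed by gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs (in varying order). The same probes can be replayed by hand; a minimal sketch, assuming an SSH session into the failing node (substitute the real profile name for <profile>):

	minikube ssh -p <profile>
	# the same checks the test driver issues over ssh_runner:
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'      # no apiserver process
	sudo crictl ps -a --quiet --name=kube-apiserver   # empty output: container never created
	sudo journalctl -u kubelet -n 400                 # why the static pods never started
	sudo journalctl -u containerd -n 400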
	I1212 01:39:21.952490  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:21.962606  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:21.962676  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:21.986826  291455 cri.go:89] found id: ""
	I1212 01:39:21.986851  291455 logs.go:282] 0 containers: []
	W1212 01:39:21.986859  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:21.986866  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:21.986923  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:22.014517  291455 cri.go:89] found id: ""
	I1212 01:39:22.014541  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.014551  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:22.014557  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:22.014623  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:22.041526  291455 cri.go:89] found id: ""
	I1212 01:39:22.041552  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.041561  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:22.041568  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:22.041633  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:22.067041  291455 cri.go:89] found id: ""
	I1212 01:39:22.067069  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.067079  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:22.067086  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:22.067149  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:22.092937  291455 cri.go:89] found id: ""
	I1212 01:39:22.092973  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.092982  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:22.092988  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:22.093059  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:22.122005  291455 cri.go:89] found id: ""
	I1212 01:39:22.122031  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.122039  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:22.122045  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:22.122107  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:22.147474  291455 cri.go:89] found id: ""
	I1212 01:39:22.147500  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.147508  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:22.147515  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:22.147577  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:22.177172  291455 cri.go:89] found id: ""
	I1212 01:39:22.177199  291455 logs.go:282] 0 containers: []
	W1212 01:39:22.177208  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:22.177219  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:22.177231  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:22.234049  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:22.234083  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:22.247594  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:22.247619  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:22.368443  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:22.359792    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.360617    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.362109    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.362602    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:22.364143    8879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:22.368462  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:22.368485  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:22.393929  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:22.393963  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:24.924468  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:24.934611  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:24.934679  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:24.960488  291455 cri.go:89] found id: ""
	I1212 01:39:24.960510  291455 logs.go:282] 0 containers: []
	W1212 01:39:24.960519  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:24.960524  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:24.960580  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:24.985199  291455 cri.go:89] found id: ""
	I1212 01:39:24.985222  291455 logs.go:282] 0 containers: []
	W1212 01:39:24.985231  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:24.985238  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:24.985295  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:25.017557  291455 cri.go:89] found id: ""
	I1212 01:39:25.017583  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.017594  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:25.017601  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:25.017673  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:25.043724  291455 cri.go:89] found id: ""
	I1212 01:39:25.043756  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.043766  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:25.043773  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:25.043836  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:25.068913  291455 cri.go:89] found id: ""
	I1212 01:39:25.068941  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.068951  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:25.068958  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:25.069021  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:25.094251  291455 cri.go:89] found id: ""
	I1212 01:39:25.094274  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.094282  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:25.094288  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:25.094347  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:25.118452  291455 cri.go:89] found id: ""
	I1212 01:39:25.118530  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.118554  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:25.118575  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:25.118691  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:25.143548  291455 cri.go:89] found id: ""
	I1212 01:39:25.143571  291455 logs.go:282] 0 containers: []
	W1212 01:39:25.143584  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:25.143594  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:25.143605  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:25.201626  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:25.201662  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:25.214871  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:25.214900  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:25.278860  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:25.271096    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.271605    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.273123    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.273537    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:25.275035    8993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:25.278890  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:25.278903  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:25.313862  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:25.313902  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:27.877952  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:27.888461  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:27.888534  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:27.912285  291455 cri.go:89] found id: ""
	I1212 01:39:27.912308  291455 logs.go:282] 0 containers: []
	W1212 01:39:27.912317  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:27.912323  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:27.912382  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:27.936668  291455 cri.go:89] found id: ""
	I1212 01:39:27.936693  291455 logs.go:282] 0 containers: []
	W1212 01:39:27.936701  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:27.936707  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:27.936763  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:27.964911  291455 cri.go:89] found id: ""
	I1212 01:39:27.964936  291455 logs.go:282] 0 containers: []
	W1212 01:39:27.964945  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:27.964952  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:27.965011  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:27.988509  291455 cri.go:89] found id: ""
	I1212 01:39:27.988530  291455 logs.go:282] 0 containers: []
	W1212 01:39:27.988539  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:27.988545  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:27.988606  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:28.014439  291455 cri.go:89] found id: ""
	I1212 01:39:28.014461  291455 logs.go:282] 0 containers: []
	W1212 01:39:28.014469  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:28.014475  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:28.014542  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:28.040611  291455 cri.go:89] found id: ""
	I1212 01:39:28.040637  291455 logs.go:282] 0 containers: []
	W1212 01:39:28.040646  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:28.040652  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:28.040711  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:28.064823  291455 cri.go:89] found id: ""
	I1212 01:39:28.064844  291455 logs.go:282] 0 containers: []
	W1212 01:39:28.064852  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:28.064858  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:28.064922  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:28.089374  291455 cri.go:89] found id: ""
	I1212 01:39:28.089397  291455 logs.go:282] 0 containers: []
	W1212 01:39:28.089405  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:28.089414  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:28.089426  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:28.146024  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:28.146058  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:28.160130  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:28.160159  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:28.225838  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:28.217551    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.218334    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.219917    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.220532    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:28.222161    9107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:28.225864  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:28.225878  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:28.250733  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:28.250768  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:30.798068  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:30.808169  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:30.808239  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:30.836768  291455 cri.go:89] found id: ""
	I1212 01:39:30.836789  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.836798  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:30.836805  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:30.836863  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:30.860144  291455 cri.go:89] found id: ""
	I1212 01:39:30.860169  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.860179  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:30.860185  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:30.860242  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:30.884081  291455 cri.go:89] found id: ""
	I1212 01:39:30.884107  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.884116  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:30.884122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:30.884180  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:30.908110  291455 cri.go:89] found id: ""
	I1212 01:39:30.908133  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.908147  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:30.908153  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:30.908213  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:30.934406  291455 cri.go:89] found id: ""
	I1212 01:39:30.934428  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.934436  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:30.934449  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:30.934507  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:30.962854  291455 cri.go:89] found id: ""
	I1212 01:39:30.962877  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.962885  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:30.962891  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:30.962963  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:30.986340  291455 cri.go:89] found id: ""
	I1212 01:39:30.986366  291455 logs.go:282] 0 containers: []
	W1212 01:39:30.986375  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:30.986385  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:30.986447  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:31.021526  291455 cri.go:89] found id: ""
	I1212 01:39:31.021557  291455 logs.go:282] 0 containers: []
	W1212 01:39:31.021567  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:31.021576  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:31.021586  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:31.080147  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:31.080186  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:31.094865  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:31.094894  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:31.159994  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:31.150850    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.151532    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.153295    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.153811    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:31.156044    9220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:31.160017  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:31.160030  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:31.187806  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:31.187844  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:33.721677  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:33.732122  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:33.732196  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:33.756604  291455 cri.go:89] found id: ""
	I1212 01:39:33.756627  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.756636  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:33.756642  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:33.756703  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:33.782055  291455 cri.go:89] found id: ""
	I1212 01:39:33.782079  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.782088  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:33.782094  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:33.782150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:33.806217  291455 cri.go:89] found id: ""
	I1212 01:39:33.806242  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.806250  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:33.806256  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:33.806313  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:33.829556  291455 cri.go:89] found id: ""
	I1212 01:39:33.829580  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.829588  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:33.829595  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:33.829651  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:33.856222  291455 cri.go:89] found id: ""
	I1212 01:39:33.856251  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.856259  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:33.856265  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:33.856323  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:33.886601  291455 cri.go:89] found id: ""
	I1212 01:39:33.886624  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.886639  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:33.886646  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:33.886703  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:33.910597  291455 cri.go:89] found id: ""
	I1212 01:39:33.910621  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.910630  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:33.910636  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:33.910701  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:33.934158  291455 cri.go:89] found id: ""
	I1212 01:39:33.934185  291455 logs.go:282] 0 containers: []
	W1212 01:39:33.934193  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:33.934202  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:33.934214  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:33.958501  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:33.958533  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:33.986448  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:33.986476  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:34.042064  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:34.042099  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:34.056951  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:34.056977  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:34.127667  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:34.120136    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.120757    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.121818    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.122189    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:34.123766    9345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:36.628762  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:36.639233  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:36.639305  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:36.663673  291455 cri.go:89] found id: ""
	I1212 01:39:36.663698  291455 logs.go:282] 0 containers: []
	W1212 01:39:36.663706  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:36.663713  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:36.663793  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:36.688862  291455 cri.go:89] found id: ""
	I1212 01:39:36.688887  291455 logs.go:282] 0 containers: []
	W1212 01:39:36.688895  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:36.688901  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:36.688963  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:36.713353  291455 cri.go:89] found id: ""
	I1212 01:39:36.713377  291455 logs.go:282] 0 containers: []
	W1212 01:39:36.713386  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:36.713392  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:36.713451  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:36.740680  291455 cri.go:89] found id: ""
	I1212 01:39:36.740747  291455 logs.go:282] 0 containers: []
	W1212 01:39:36.740762  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:36.740769  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:36.740831  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:36.765582  291455 cri.go:89] found id: ""
	I1212 01:39:36.765657  291455 logs.go:282] 0 containers: []
	W1212 01:39:36.765679  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:36.765700  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:36.765788  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:36.790002  291455 cri.go:89] found id: ""
	I1212 01:39:36.790025  291455 logs.go:282] 0 containers: []
	W1212 01:39:36.790034  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:36.790040  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:36.790099  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:36.816694  291455 cri.go:89] found id: ""
	I1212 01:39:36.816717  291455 logs.go:282] 0 containers: []
	W1212 01:39:36.816728  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:36.816734  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:36.816793  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:36.845138  291455 cri.go:89] found id: ""
	I1212 01:39:36.845202  291455 logs.go:282] 0 containers: []
	W1212 01:39:36.845218  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:36.845229  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:36.845241  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:36.903016  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:36.903054  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:36.918866  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:36.918903  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:36.984787  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:36.976973    9447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:36.977356    9447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:36.978879    9447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:36.979250    9447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:36.980908    9447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:39:36.984810  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:36.984821  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:37.009360  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:37.009399  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:39.548914  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:39.562684  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:39.562807  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:39.598338  291455 cri.go:89] found id: ""
	I1212 01:39:39.598363  291455 logs.go:282] 0 containers: []
	W1212 01:39:39.598372  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:39.598378  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:39.598435  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:39.636963  291455 cri.go:89] found id: ""
	I1212 01:39:39.636985  291455 logs.go:282] 0 containers: []
	W1212 01:39:39.636993  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:39.636999  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:39.637057  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:39.662905  291455 cri.go:89] found id: ""
	I1212 01:39:39.662928  291455 logs.go:282] 0 containers: []
	W1212 01:39:39.662936  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:39.662942  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:39.663047  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:39.686743  291455 cri.go:89] found id: ""
	I1212 01:39:39.686808  291455 logs.go:282] 0 containers: []
	W1212 01:39:39.686819  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:39.686826  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:39.686892  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:39.710381  291455 cri.go:89] found id: ""
	I1212 01:39:39.710452  291455 logs.go:282] 0 containers: []
	W1212 01:39:39.710476  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:39.710496  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:39.710581  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:39.734865  291455 cri.go:89] found id: ""
	I1212 01:39:39.734894  291455 logs.go:282] 0 containers: []
	W1212 01:39:39.734903  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:39.734910  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:39.735019  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:39.762787  291455 cri.go:89] found id: ""
	I1212 01:39:39.762813  291455 logs.go:282] 0 containers: []
	W1212 01:39:39.762822  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:39.762828  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:39.762940  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:39.788339  291455 cri.go:89] found id: ""
	I1212 01:39:39.788368  291455 logs.go:282] 0 containers: []
	W1212 01:39:39.788378  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:39.788388  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:39.788417  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:39.843014  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:39.843046  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:39.856565  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:39.856593  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:39.921611  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:39.913029    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:39.913865    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:39.915691    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:39.916129    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:39.917607    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:39.913029    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:39.913865    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:39.915691    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:39.916129    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:39.917607    9558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:39.921632  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:39.921644  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:39.948006  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:39.948039  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
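The pass above is minikube's control-plane probe in full: pgrep for a kube-apiserver process, then a CRI query for each expected component container by name, then (since everything comes back empty) gathering kubelet, dmesg, describe-nodes, containerd, and container-status output. The same checks can be repeated by hand over SSH to the node; the commands below are exactly the ones the log shows, with only the loop wrapper added for illustration:

	# Query the CRI for each control-plane component the probe looks for;
	# empty output for a name means no container (running or exited) exists.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"
	done
	# The journals the probe falls back to when every listing is empty:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400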
	I1212 01:39:42.479881  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:42.490524  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:42.490602  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:42.515563  291455 cri.go:89] found id: ""
	I1212 01:39:42.515641  291455 logs.go:282] 0 containers: []
	W1212 01:39:42.515656  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:42.515664  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:42.515725  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:42.560104  291455 cri.go:89] found id: ""
	I1212 01:39:42.560137  291455 logs.go:282] 0 containers: []
	W1212 01:39:42.560145  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:42.560152  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:42.560226  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:42.597091  291455 cri.go:89] found id: ""
	I1212 01:39:42.597131  291455 logs.go:282] 0 containers: []
	W1212 01:39:42.597140  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:42.597147  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:42.597219  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:42.629203  291455 cri.go:89] found id: ""
	I1212 01:39:42.629233  291455 logs.go:282] 0 containers: []
	W1212 01:39:42.629242  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:42.629248  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:42.629312  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:42.657935  291455 cri.go:89] found id: ""
	I1212 01:39:42.657959  291455 logs.go:282] 0 containers: []
	W1212 01:39:42.657968  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:42.657974  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:42.658039  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:42.684776  291455 cri.go:89] found id: ""
	I1212 01:39:42.684806  291455 logs.go:282] 0 containers: []
	W1212 01:39:42.684815  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:42.684822  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:42.684879  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:42.709384  291455 cri.go:89] found id: ""
	I1212 01:39:42.709419  291455 logs.go:282] 0 containers: []
	W1212 01:39:42.709429  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:42.709435  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:42.709505  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:42.733686  291455 cri.go:89] found id: ""
	I1212 01:39:42.733728  291455 logs.go:282] 0 containers: []
	W1212 01:39:42.733737  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:42.733747  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:42.733758  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:42.758552  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:42.758630  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:42.787823  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:42.787852  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:42.845099  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:42.845135  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:42.858856  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:42.858904  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:42.924089  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:42.915364    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:42.915778    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:42.917282    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:42.918788    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:42.920024    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:42.915364    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:42.915778    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:42.917282    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:42.918788    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:42.920024    9684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
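Every "describe nodes" attempt fails the same way: kubectl cannot reach https://localhost:8443 because, consistent with the empty crictl listings above, no kube-apiserver is running on the node. The timestamps show one full pass roughly every three seconds, i.e. a poll equivalent to the sketch below (the pgrep pattern is the one from the log; the 60-second budget is an illustrative value, not minikube's actual timeout):

	# Poll for an apiserver process the way the probe does, with a hypothetical deadline.
	deadline=$((SECONDS + 60))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  if [ "$SECONDS" -ge "$deadline" ]; then
	    echo "kube-apiserver never appeared" >&2
	    exit 1
	  fi
	  sleep 3
	done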
	I1212 01:39:45.424349  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:45.434772  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:45.434853  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:45.459272  291455 cri.go:89] found id: ""
	I1212 01:39:45.459297  291455 logs.go:282] 0 containers: []
	W1212 01:39:45.459306  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:45.459351  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:45.459482  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:45.485201  291455 cri.go:89] found id: ""
	I1212 01:39:45.485235  291455 logs.go:282] 0 containers: []
	W1212 01:39:45.485244  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:45.485266  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:45.485348  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:45.508997  291455 cri.go:89] found id: ""
	I1212 01:39:45.509022  291455 logs.go:282] 0 containers: []
	W1212 01:39:45.509031  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:45.509037  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:45.509094  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:45.533177  291455 cri.go:89] found id: ""
	I1212 01:39:45.533209  291455 logs.go:282] 0 containers: []
	W1212 01:39:45.533218  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:45.533224  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:45.533289  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:45.594513  291455 cri.go:89] found id: ""
	I1212 01:39:45.594538  291455 logs.go:282] 0 containers: []
	W1212 01:39:45.594546  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:45.594553  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:45.594617  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:45.627865  291455 cri.go:89] found id: ""
	I1212 01:39:45.627903  291455 logs.go:282] 0 containers: []
	W1212 01:39:45.627913  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:45.627919  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:45.627987  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:45.655026  291455 cri.go:89] found id: ""
	I1212 01:39:45.655049  291455 logs.go:282] 0 containers: []
	W1212 01:39:45.655058  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:45.655064  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:45.655127  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:45.680560  291455 cri.go:89] found id: ""
	I1212 01:39:45.680635  291455 logs.go:282] 0 containers: []
	W1212 01:39:45.680650  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:45.680660  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:45.680672  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:45.744860  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:45.736290    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:45.736804    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:45.738503    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:45.739012    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:45.740483    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:45.736290    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:45.736804    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:45.738503    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:45.739012    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:45.740483    9776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:45.744886  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:45.744908  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:45.770100  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:45.770135  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:45.797429  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:45.797455  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:45.853262  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:45.853296  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:48.367022  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:48.378069  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:48.378145  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:48.403921  291455 cri.go:89] found id: ""
	I1212 01:39:48.403943  291455 logs.go:282] 0 containers: []
	W1212 01:39:48.403952  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:48.403958  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:48.404016  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:48.428988  291455 cri.go:89] found id: ""
	I1212 01:39:48.429012  291455 logs.go:282] 0 containers: []
	W1212 01:39:48.429020  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:48.429027  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:48.429084  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:48.453105  291455 cri.go:89] found id: ""
	I1212 01:39:48.453128  291455 logs.go:282] 0 containers: []
	W1212 01:39:48.453137  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:48.453143  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:48.453201  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:48.477514  291455 cri.go:89] found id: ""
	I1212 01:39:48.477536  291455 logs.go:282] 0 containers: []
	W1212 01:39:48.477546  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:48.477551  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:48.477612  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:48.506708  291455 cri.go:89] found id: ""
	I1212 01:39:48.506730  291455 logs.go:282] 0 containers: []
	W1212 01:39:48.506738  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:48.506743  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:48.506801  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:48.546136  291455 cri.go:89] found id: ""
	I1212 01:39:48.546158  291455 logs.go:282] 0 containers: []
	W1212 01:39:48.546166  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:48.546172  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:48.546230  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:48.584757  291455 cri.go:89] found id: ""
	I1212 01:39:48.584778  291455 logs.go:282] 0 containers: []
	W1212 01:39:48.584787  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:48.584792  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:48.584860  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:48.624953  291455 cri.go:89] found id: ""
	I1212 01:39:48.624973  291455 logs.go:282] 0 containers: []
	W1212 01:39:48.624981  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:48.624989  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:48.625000  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:48.682582  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:48.682616  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:48.696819  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:48.696847  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:48.761964  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:48.752888    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:48.753667    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:48.755472    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:48.756105    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:48.757916    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:48.752888    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:48.753667    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:48.755472    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:48.756105    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:48.757916    9897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:48.761982  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:48.761994  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:48.787735  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:48.787766  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
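Note that the probe runs the version-matched kubectl that minikube ships on the node, against the node-local kubeconfig, so the connection refusal is coming from the node's own loopback rather than from host-to-container networking. To reproduce the failing call by hand (the command is taken verbatim from the log):

	# Same describe-nodes invocation the probe issues over SSH:
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig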
	I1212 01:39:51.315518  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:51.325805  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:51.325878  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:51.348771  291455 cri.go:89] found id: ""
	I1212 01:39:51.348797  291455 logs.go:282] 0 containers: []
	W1212 01:39:51.348806  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:51.348812  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:51.348892  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:51.372310  291455 cri.go:89] found id: ""
	I1212 01:39:51.372384  291455 logs.go:282] 0 containers: []
	W1212 01:39:51.372399  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:51.372406  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:51.372463  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:51.410822  291455 cri.go:89] found id: ""
	I1212 01:39:51.410855  291455 logs.go:282] 0 containers: []
	W1212 01:39:51.410865  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:51.410871  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:51.410935  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:51.434671  291455 cri.go:89] found id: ""
	I1212 01:39:51.434702  291455 logs.go:282] 0 containers: []
	W1212 01:39:51.434710  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:51.434716  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:51.434783  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:51.459981  291455 cri.go:89] found id: ""
	I1212 01:39:51.460054  291455 logs.go:282] 0 containers: []
	W1212 01:39:51.460070  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:51.460077  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:51.460134  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:51.484764  291455 cri.go:89] found id: ""
	I1212 01:39:51.484788  291455 logs.go:282] 0 containers: []
	W1212 01:39:51.484802  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:51.484808  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:51.484864  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:51.508943  291455 cri.go:89] found id: ""
	I1212 01:39:51.508966  291455 logs.go:282] 0 containers: []
	W1212 01:39:51.508974  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:51.508981  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:51.509040  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:51.553470  291455 cri.go:89] found id: ""
	I1212 01:39:51.553497  291455 logs.go:282] 0 containers: []
	W1212 01:39:51.553505  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:51.553514  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:51.553525  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:51.653146  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:51.645161   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:51.645898   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:51.647455   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:51.647736   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:51.649182   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:51.645161   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:51.645898   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:51.647455   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:51.647736   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:51.649182   10003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:51.653168  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:51.653179  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:51.679418  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:51.679450  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:51.709581  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:51.709607  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:51.764844  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:51.764878  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:54.280411  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:54.290776  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:54.290856  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:54.315212  291455 cri.go:89] found id: ""
	I1212 01:39:54.315236  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.315246  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:54.315253  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:54.315311  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:54.339856  291455 cri.go:89] found id: ""
	I1212 01:39:54.339881  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.339890  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:54.339896  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:54.339958  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:54.368679  291455 cri.go:89] found id: ""
	I1212 01:39:54.368702  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.368711  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:54.368717  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:54.368776  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:54.393467  291455 cri.go:89] found id: ""
	I1212 01:39:54.393491  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.393500  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:54.393507  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:54.393566  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:54.418691  291455 cri.go:89] found id: ""
	I1212 01:39:54.418713  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.418722  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:54.418728  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:54.418785  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:54.444722  291455 cri.go:89] found id: ""
	I1212 01:39:54.444745  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.444759  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:54.444766  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:54.444824  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:54.470007  291455 cri.go:89] found id: ""
	I1212 01:39:54.470029  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.470037  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:54.470043  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:54.470104  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:54.494270  291455 cri.go:89] found id: ""
	I1212 01:39:54.494340  291455 logs.go:282] 0 containers: []
	W1212 01:39:54.494354  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:54.494364  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:54.494403  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:54.599318  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:54.577598   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.579503   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.592839   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.593570   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.595243   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:54.577598   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.579503   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.592839   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.593570   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:54.595243   10110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:54.599389  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:54.599417  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:39:54.630152  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:54.630190  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:54.658141  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:54.658167  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:54.713516  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:54.713551  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:57.227361  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:39:57.237887  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:39:57.237955  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:39:57.262202  291455 cri.go:89] found id: ""
	I1212 01:39:57.262227  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.262236  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:39:57.262242  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:39:57.262299  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:39:57.287795  291455 cri.go:89] found id: ""
	I1212 01:39:57.287819  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.287828  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:39:57.287834  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:39:57.287900  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:39:57.312347  291455 cri.go:89] found id: ""
	I1212 01:39:57.312372  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.312381  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:39:57.312387  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:39:57.312448  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:39:57.340890  291455 cri.go:89] found id: ""
	I1212 01:39:57.340914  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.340924  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:39:57.340930  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:39:57.340994  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:39:57.364578  291455 cri.go:89] found id: ""
	I1212 01:39:57.364643  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.364658  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:39:57.364666  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:39:57.364735  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:39:57.389147  291455 cri.go:89] found id: ""
	I1212 01:39:57.389175  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.389184  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:39:57.389191  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:39:57.389248  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:39:57.415275  291455 cri.go:89] found id: ""
	I1212 01:39:57.415300  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.415315  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:39:57.415322  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:39:57.415385  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:39:57.440087  291455 cri.go:89] found id: ""
	I1212 01:39:57.440109  291455 logs.go:282] 0 containers: []
	W1212 01:39:57.440118  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:39:57.440127  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:39:57.440138  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:39:57.467124  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:39:57.467150  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:39:57.522232  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:39:57.522269  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:39:57.538082  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:39:57.538160  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:39:57.643552  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:39:57.631917   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.632540   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.634329   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.634855   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.636638   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:39:57.631917   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.632540   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.634329   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.634855   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:39:57.636638   10245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:39:57.643574  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:39:57.643586  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:00.169313  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:00.228741  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:00.228823  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:00.303835  291455 cri.go:89] found id: ""
	I1212 01:40:00.305186  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.305353  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:00.309177  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:00.309371  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:00.363791  291455 cri.go:89] found id: ""
	I1212 01:40:00.363817  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.363826  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:00.363832  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:00.363904  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:00.428687  291455 cri.go:89] found id: ""
	I1212 01:40:00.428710  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.428720  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:00.428727  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:00.428821  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:00.471696  291455 cri.go:89] found id: ""
	I1212 01:40:00.471723  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.471732  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:00.471740  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:00.471820  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:00.509321  291455 cri.go:89] found id: ""
	I1212 01:40:00.509347  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.509372  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:00.509381  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:00.509460  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:00.593692  291455 cri.go:89] found id: ""
	I1212 01:40:00.593716  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.593725  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:00.593732  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:00.593800  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:00.662781  291455 cri.go:89] found id: ""
	I1212 01:40:00.662804  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.662813  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:00.662819  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:00.662912  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:00.689999  291455 cri.go:89] found id: ""
	I1212 01:40:00.690023  291455 logs.go:282] 0 containers: []
	W1212 01:40:00.690031  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:00.690041  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:00.690053  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:00.747296  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:00.747331  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:00.761427  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:00.761454  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:00.828444  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:00.819830   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.820596   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.822241   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.822754   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.824365   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:00.819830   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.820596   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.822241   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.822754   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:00.824365   10350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:00.828466  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:00.828479  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:00.855218  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:00.855254  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
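Through 01:40:00 the picture is unchanged: no control-plane container has ever been created, which points at kubelet/containerd bootstrap rather than at any particular pod, and is why the probe keeps re-gathering those two journals. A quick cross-check that is not part of the log, assuming shell access to the node, is to confirm that nothing is listening on the apiserver port at all:

	# Illustrative verification step: list TCP listeners and look for 8443 (expect no match here).
	sudo ss -ltn | grep 8443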
	I1212 01:40:03.387867  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:03.398566  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:03.398659  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:03.427352  291455 cri.go:89] found id: ""
	I1212 01:40:03.427376  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.427385  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:03.427391  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:03.427456  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:03.451979  291455 cri.go:89] found id: ""
	I1212 01:40:03.452054  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.452069  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:03.452076  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:03.452150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:03.475705  291455 cri.go:89] found id: ""
	I1212 01:40:03.475729  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.475739  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:03.475744  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:03.475831  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:03.500258  291455 cri.go:89] found id: ""
	I1212 01:40:03.500283  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.500293  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:03.500300  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:03.500360  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:03.528939  291455 cri.go:89] found id: ""
	I1212 01:40:03.528962  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.528971  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:03.528976  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:03.529037  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:03.557541  291455 cri.go:89] found id: ""
	I1212 01:40:03.557566  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.557575  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:03.557581  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:03.557645  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:03.611801  291455 cri.go:89] found id: ""
	I1212 01:40:03.611827  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.611837  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:03.611843  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:03.611906  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:03.641008  291455 cri.go:89] found id: ""
	I1212 01:40:03.641034  291455 logs.go:282] 0 containers: []
	W1212 01:40:03.641043  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:03.641053  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:03.641064  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:03.696830  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:03.696868  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:03.710227  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:03.710256  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:03.777119  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:03.769143   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.769540   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.771066   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.771655   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.773341   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:03.769143   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.769540   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.771066   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.771655   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:03.773341   10461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:03.777184  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:03.777203  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:03.802465  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:03.802497  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:06.331826  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:06.342482  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:06.342547  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:06.366505  291455 cri.go:89] found id: ""
	I1212 01:40:06.366527  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.366536  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:06.366542  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:06.366599  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:06.391672  291455 cri.go:89] found id: ""
	I1212 01:40:06.391696  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.391705  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:06.391711  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:06.391774  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:06.416914  291455 cri.go:89] found id: ""
	I1212 01:40:06.416941  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.416950  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:06.416956  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:06.417031  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:06.441562  291455 cri.go:89] found id: ""
	I1212 01:40:06.441584  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.441599  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:06.441606  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:06.441665  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:06.469918  291455 cri.go:89] found id: ""
	I1212 01:40:06.469942  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.469951  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:06.469957  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:06.470014  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:06.494455  291455 cri.go:89] found id: ""
	I1212 01:40:06.494478  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.494487  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:06.494503  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:06.494579  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:06.520013  291455 cri.go:89] found id: ""
	I1212 01:40:06.520037  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.520046  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:06.520052  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:06.520108  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:06.571478  291455 cri.go:89] found id: ""
	I1212 01:40:06.571509  291455 logs.go:282] 0 containers: []
	W1212 01:40:06.571518  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:06.571528  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:06.571539  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:06.616555  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:06.616594  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:06.657561  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:06.657589  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:06.715328  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:06.715409  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:06.728591  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:06.728620  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:06.792104  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:06.783643   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.784436   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.785957   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.786254   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.787744   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:06.783643   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.784436   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.785957   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.786254   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:06.787744   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:09.292912  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:09.303462  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:09.303537  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:09.329031  291455 cri.go:89] found id: ""
	I1212 01:40:09.329057  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.329066  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:09.329072  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:09.329188  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:09.353474  291455 cri.go:89] found id: ""
	I1212 01:40:09.353498  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.353507  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:09.353513  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:09.353570  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:09.380805  291455 cri.go:89] found id: ""
	I1212 01:40:09.380830  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.380839  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:09.380845  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:09.380959  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:09.408831  291455 cri.go:89] found id: ""
	I1212 01:40:09.408854  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.408862  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:09.408868  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:09.408943  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:09.433352  291455 cri.go:89] found id: ""
	I1212 01:40:09.433374  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.433383  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:09.433389  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:09.433450  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:09.458129  291455 cri.go:89] found id: ""
	I1212 01:40:09.458149  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.458158  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:09.458165  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:09.458222  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:09.484528  291455 cri.go:89] found id: ""
	I1212 01:40:09.484552  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.484560  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:09.484567  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:09.484624  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:09.512777  291455 cri.go:89] found id: ""
	I1212 01:40:09.512802  291455 logs.go:282] 0 containers: []
	W1212 01:40:09.512811  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:09.512820  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:09.512831  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:09.563517  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:09.563545  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:09.660558  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:09.660595  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:09.674516  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:09.674541  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:09.738215  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:09.730040   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.730861   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.732394   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.732881   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.734347   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:09.730040   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.730861   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.732394   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.732881   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:09.734347   10697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:09.738241  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:09.738253  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:12.263748  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:12.273959  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:12.274029  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:12.297055  291455 cri.go:89] found id: ""
	I1212 01:40:12.297087  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.297096  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:12.297118  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:12.297179  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:12.322284  291455 cri.go:89] found id: ""
	I1212 01:40:12.322308  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.322317  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:12.322323  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:12.322397  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:12.345905  291455 cri.go:89] found id: ""
	I1212 01:40:12.345929  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.345938  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:12.345944  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:12.346024  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:12.370571  291455 cri.go:89] found id: ""
	I1212 01:40:12.370593  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.370602  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:12.370608  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:12.370695  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:12.397426  291455 cri.go:89] found id: ""
	I1212 01:40:12.397473  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.397495  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:12.397514  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:12.397602  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:12.426531  291455 cri.go:89] found id: ""
	I1212 01:40:12.426556  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.426564  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:12.426571  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:12.426644  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:12.450837  291455 cri.go:89] found id: ""
	I1212 01:40:12.450864  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.450874  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:12.450882  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:12.450941  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:12.475392  291455 cri.go:89] found id: ""
	I1212 01:40:12.475415  291455 logs.go:282] 0 containers: []
	W1212 01:40:12.475423  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:12.475433  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:12.475443  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:12.500596  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:12.500630  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:12.539878  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:12.539912  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:12.636980  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:12.637024  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:12.651233  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:12.651261  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:12.719321  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:12.710320   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.711168   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.712905   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.713556   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.715342   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:12.710320   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.711168   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.712905   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.713556   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:12.715342   10812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:15.219607  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:15.230736  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:15.230837  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:15.255192  291455 cri.go:89] found id: ""
	I1212 01:40:15.255216  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.255225  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:15.255250  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:15.255312  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:15.280065  291455 cri.go:89] found id: ""
	I1212 01:40:15.280088  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.280097  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:15.280103  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:15.280182  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:15.305428  291455 cri.go:89] found id: ""
	I1212 01:40:15.305451  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.305460  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:15.305467  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:15.305533  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:15.329513  291455 cri.go:89] found id: ""
	I1212 01:40:15.329537  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.329545  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:15.329552  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:15.329612  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:15.353724  291455 cri.go:89] found id: ""
	I1212 01:40:15.353748  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.353757  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:15.353764  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:15.353821  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:15.379891  291455 cri.go:89] found id: ""
	I1212 01:40:15.379921  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.379930  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:15.379936  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:15.379994  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:15.410206  291455 cri.go:89] found id: ""
	I1212 01:40:15.410232  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.410242  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:15.410249  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:15.410308  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:15.436574  291455 cri.go:89] found id: ""
	I1212 01:40:15.436607  291455 logs.go:282] 0 containers: []
	W1212 01:40:15.436616  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:15.436628  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:15.436640  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:15.496631  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:15.496672  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:15.511586  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:15.511614  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:15.643166  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:15.635198   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.635698   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.637279   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.637833   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.639441   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:15.635198   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.635698   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.637279   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.637833   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:15.639441   10905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:15.643192  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:15.643208  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:15.668006  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:15.668044  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:18.199232  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:18.210162  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:18.210237  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:18.235304  291455 cri.go:89] found id: ""
	I1212 01:40:18.235330  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.235339  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:18.235347  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:18.235412  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:18.261126  291455 cri.go:89] found id: ""
	I1212 01:40:18.261149  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.261157  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:18.261163  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:18.261225  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:18.285920  291455 cri.go:89] found id: ""
	I1212 01:40:18.285946  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.285954  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:18.285961  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:18.286056  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:18.310447  291455 cri.go:89] found id: ""
	I1212 01:40:18.310490  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.310500  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:18.310523  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:18.310601  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:18.334613  291455 cri.go:89] found id: ""
	I1212 01:40:18.334643  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.334653  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:18.334659  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:18.334725  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:18.363763  291455 cri.go:89] found id: ""
	I1212 01:40:18.363787  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.363797  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:18.363803  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:18.363864  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:18.389696  291455 cri.go:89] found id: ""
	I1212 01:40:18.389730  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.389739  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:18.389745  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:18.389812  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:18.416961  291455 cri.go:89] found id: ""
	I1212 01:40:18.417035  291455 logs.go:282] 0 containers: []
	W1212 01:40:18.417059  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:18.417077  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:18.417104  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:18.474235  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:18.474268  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:18.487640  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:18.487666  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:18.567561  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:18.554594   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.555595   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.560540   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.560843   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.562408   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:18.554594   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.555595   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.560540   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.560843   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:18.562408   11020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:18.567584  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:18.567597  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:18.597523  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:18.597557  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:21.132296  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:21.142685  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:21.142760  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:21.171993  291455 cri.go:89] found id: ""
	I1212 01:40:21.172020  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.172029  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:21.172035  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:21.172096  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:21.195907  291455 cri.go:89] found id: ""
	I1212 01:40:21.195929  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.195938  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:21.195944  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:21.196007  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:21.219496  291455 cri.go:89] found id: ""
	I1212 01:40:21.219524  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.219533  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:21.219540  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:21.219601  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:21.243807  291455 cri.go:89] found id: ""
	I1212 01:40:21.243834  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.243844  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:21.243850  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:21.243910  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:21.268956  291455 cri.go:89] found id: ""
	I1212 01:40:21.268977  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.268986  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:21.268993  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:21.269052  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:21.297557  291455 cri.go:89] found id: ""
	I1212 01:40:21.297580  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.297588  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:21.297595  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:21.297652  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:21.321755  291455 cri.go:89] found id: ""
	I1212 01:40:21.321776  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.321791  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:21.321798  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:21.321861  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:21.349054  291455 cri.go:89] found id: ""
	I1212 01:40:21.349076  291455 logs.go:282] 0 containers: []
	W1212 01:40:21.349085  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:21.349094  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:21.349108  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:21.374597  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:21.374636  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:21.403444  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:21.403469  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:21.461656  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:21.461690  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:21.475293  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:21.475320  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:21.560836  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:21.545907   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.546745   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.548429   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.548732   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.550543   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:40:21.545907   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.546745   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.548429   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.548732   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:21.550543   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:40:24.061094  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:24.071831  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:24.071913  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:24.097936  291455 cri.go:89] found id: ""
	I1212 01:40:24.097962  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.097971  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:24.097978  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:24.098036  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:24.127785  291455 cri.go:89] found id: ""
	I1212 01:40:24.127809  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.127819  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:24.127826  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:24.127889  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:24.153026  291455 cri.go:89] found id: ""
	I1212 01:40:24.153052  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.153063  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:24.153068  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:24.153127  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:24.176972  291455 cri.go:89] found id: ""
	I1212 01:40:24.176997  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.177006  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:24.177013  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:24.177073  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:24.213590  291455 cri.go:89] found id: ""
	I1212 01:40:24.213614  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.213623  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:24.213638  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:24.213696  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:24.241058  291455 cri.go:89] found id: ""
	I1212 01:40:24.241084  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.241092  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:24.241099  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:24.241158  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:24.265936  291455 cri.go:89] found id: ""
	I1212 01:40:24.265977  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.265985  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:24.265991  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:24.266050  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:24.289751  291455 cri.go:89] found id: ""
	I1212 01:40:24.289779  291455 logs.go:282] 0 containers: []
	W1212 01:40:24.289788  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:24.289798  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:24.289809  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:24.316973  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:24.316999  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:24.372346  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:24.372380  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:24.385931  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:24.385960  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:24.453792  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:24.445261   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.445682   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.447332   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.447939   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:24.449784   11257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:24.453813  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:24.453826  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
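	The blocks above are minikube's control-plane poll: for each expected component it asks the CRI runtime for matching containers and warns when none exist. A minimal sketch of the same check, run inside the node (assumes crictl is installed and talks to the default containerd socket; the component list mirrors the log):

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      # --quiet prints container IDs only; an empty result means the
	      # component has no container, running or exited, on this runtime
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "no container found matching \"$name\""
	    done

	Here every component comes back empty, so the control plane never produced a single container.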
	I1212 01:40:26.980134  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:26.991597  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:26.991671  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:27.019040  291455 cri.go:89] found id: ""
	I1212 01:40:27.019064  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.019073  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:27.019080  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:27.019154  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:27.046812  291455 cri.go:89] found id: ""
	I1212 01:40:27.046841  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.046854  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:27.046860  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:27.046968  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:27.071383  291455 cri.go:89] found id: ""
	I1212 01:40:27.071405  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.071414  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:27.071420  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:27.071490  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:27.095638  291455 cri.go:89] found id: ""
	I1212 01:40:27.095663  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.095672  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:27.095678  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:27.095755  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:27.119028  291455 cri.go:89] found id: ""
	I1212 01:40:27.119050  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.119059  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:27.119064  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:27.119123  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:27.143722  291455 cri.go:89] found id: ""
	I1212 01:40:27.143748  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.143757  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:27.143763  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:27.143839  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:27.167989  291455 cri.go:89] found id: ""
	I1212 01:40:27.168066  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.168088  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:27.168097  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:27.168168  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:27.193229  291455 cri.go:89] found id: ""
	I1212 01:40:27.193269  291455 logs.go:282] 0 containers: []
	W1212 01:40:27.193279  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:27.193289  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:27.193304  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:27.248752  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:27.248788  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:27.262591  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:27.262627  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:27.329086  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:27.321229   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.321673   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.323243   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.323775   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:27.325351   11360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:27.329111  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:27.329123  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:27.354405  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:27.354442  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
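	With no component containers to inspect, minikube falls back to host-level logs. The equivalent commands, lifted verbatim from the Run lines above, can be used directly on the node for manual triage:

	    # last 400 lines of the kubelet and containerd unit journals
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400

	    # kernel messages, warnings and worse only, human-readable output
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	    # container status: prefer crictl, fall back to docker
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a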
	I1212 01:40:29.885003  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:29.896299  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:29.896378  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:29.922911  291455 cri.go:89] found id: ""
	I1212 01:40:29.922945  291455 logs.go:282] 0 containers: []
	W1212 01:40:29.922954  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:29.922961  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:29.923063  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:29.949238  291455 cri.go:89] found id: ""
	I1212 01:40:29.949264  291455 logs.go:282] 0 containers: []
	W1212 01:40:29.949273  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:29.949280  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:29.949338  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:29.974510  291455 cri.go:89] found id: ""
	I1212 01:40:29.974536  291455 logs.go:282] 0 containers: []
	W1212 01:40:29.974545  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:29.974551  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:29.974608  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:29.999116  291455 cri.go:89] found id: ""
	I1212 01:40:29.999142  291455 logs.go:282] 0 containers: []
	W1212 01:40:29.999151  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:29.999157  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:29.999223  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:30.078011  291455 cri.go:89] found id: ""
	I1212 01:40:30.078040  291455 logs.go:282] 0 containers: []
	W1212 01:40:30.078050  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:30.078058  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:30.078132  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:30.105966  291455 cri.go:89] found id: ""
	I1212 01:40:30.105993  291455 logs.go:282] 0 containers: []
	W1212 01:40:30.106003  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:30.106010  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:30.106078  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:30.134703  291455 cri.go:89] found id: ""
	I1212 01:40:30.134726  291455 logs.go:282] 0 containers: []
	W1212 01:40:30.134735  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:30.134780  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:30.134874  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:30.161984  291455 cri.go:89] found id: ""
	I1212 01:40:30.162009  291455 logs.go:282] 0 containers: []
	W1212 01:40:30.162018  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:30.162028  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:30.162039  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:30.193075  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:30.193103  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:30.252472  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:30.252508  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:30.266246  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:30.266276  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:30.333852  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:30.325323   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:30.325890   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:30.327426   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:30.327865   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:30.329291   11485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:30.333874  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:30.333886  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:32.860948  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:32.872085  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:32.872163  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:32.901386  291455 cri.go:89] found id: ""
	I1212 01:40:32.901410  291455 logs.go:282] 0 containers: []
	W1212 01:40:32.901425  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:32.901438  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:32.901499  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:32.926818  291455 cri.go:89] found id: ""
	I1212 01:40:32.926844  291455 logs.go:282] 0 containers: []
	W1212 01:40:32.926853  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:32.926859  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:32.926927  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:32.956149  291455 cri.go:89] found id: ""
	I1212 01:40:32.956187  291455 logs.go:282] 0 containers: []
	W1212 01:40:32.956196  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:32.956202  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:32.956259  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:32.988134  291455 cri.go:89] found id: ""
	I1212 01:40:32.988159  291455 logs.go:282] 0 containers: []
	W1212 01:40:32.988168  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:32.988174  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:32.988231  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:33.014432  291455 cri.go:89] found id: ""
	I1212 01:40:33.014459  291455 logs.go:282] 0 containers: []
	W1212 01:40:33.014468  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:33.014474  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:33.014534  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:33.039814  291455 cri.go:89] found id: ""
	I1212 01:40:33.039843  291455 logs.go:282] 0 containers: []
	W1212 01:40:33.039852  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:33.039859  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:33.039921  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:33.068378  291455 cri.go:89] found id: ""
	I1212 01:40:33.068401  291455 logs.go:282] 0 containers: []
	W1212 01:40:33.068410  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:33.068417  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:33.068475  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:33.097661  291455 cri.go:89] found id: ""
	I1212 01:40:33.097725  291455 logs.go:282] 0 containers: []
	W1212 01:40:33.097750  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:33.097775  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:33.097803  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:33.129775  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:33.129802  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:33.189298  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:33.189332  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:33.202981  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:33.203026  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:33.264626  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:33.256228   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:33.256801   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:33.258449   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:33.259112   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:33.260717   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:33.264648  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:33.264665  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
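	Each iteration also retries "describe nodes" with the version-pinned kubectl under /var/lib/minikube/binaries and the in-node kubeconfig; the repeated "dial tcp [::1]:8443: connect: connection refused" means nothing is listening on the apiserver port at all. A hedged way to reproduce and narrow that down on the node (assumes curl is present in the node image):

	    # the exact command the log keeps retrying
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig

	    # probe the port directly: "connection refused" here confirms the
	    # apiserver is down, as opposed to a TLS or RBAC problem
	    curl -sk https://localhost:8443/healthz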
	I1212 01:40:35.791109  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:35.807877  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:35.807951  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:35.851416  291455 cri.go:89] found id: ""
	I1212 01:40:35.851442  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.851450  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:35.851456  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:35.851518  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:35.888920  291455 cri.go:89] found id: ""
	I1212 01:40:35.888943  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.888952  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:35.888958  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:35.889018  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:35.915592  291455 cri.go:89] found id: ""
	I1212 01:40:35.915618  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.915628  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:35.915634  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:35.915715  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:35.939272  291455 cri.go:89] found id: ""
	I1212 01:40:35.939296  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.939305  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:35.939311  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:35.939370  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:35.968216  291455 cri.go:89] found id: ""
	I1212 01:40:35.968244  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.968253  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:35.968259  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:35.968317  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:35.993761  291455 cri.go:89] found id: ""
	I1212 01:40:35.993785  291455 logs.go:282] 0 containers: []
	W1212 01:40:35.993796  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:35.993803  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:35.993863  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:36.022585  291455 cri.go:89] found id: ""
	I1212 01:40:36.022612  291455 logs.go:282] 0 containers: []
	W1212 01:40:36.022633  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:36.022640  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:36.022712  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:36.052933  291455 cri.go:89] found id: ""
	I1212 01:40:36.052955  291455 logs.go:282] 0 containers: []
	W1212 01:40:36.052965  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:36.052974  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:36.052991  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:36.122317  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:36.113883   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:36.114412   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:36.115894   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:36.116408   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:36.118260   11700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:36.122340  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:36.122353  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:36.146907  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:36.146940  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:36.174411  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:36.174444  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:36.229229  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:36.229259  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
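	The timestamps (01:40:24, :26.9, :29.8, :32.8, ...) show the poll firing roughly every 2.5 seconds, gated on a pgrep for the apiserver process. A minimal sketch of such a wait loop; the 120-second budget is illustrative, not minikube's actual timeout:

	    deadline=$((SECONDS + 120))   # illustrative budget only
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$SECONDS" -ge "$deadline" ]; then
	        echo "timed out waiting for kube-apiserver" >&2
	        exit 1
	      fi
	      sleep 2.5   # matches the cadence visible in the log timestamps
	    done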
	I1212 01:40:38.742843  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:38.753061  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:38.753132  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:38.777998  291455 cri.go:89] found id: ""
	I1212 01:40:38.778024  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.778033  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:38.778039  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:38.778098  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:38.819601  291455 cri.go:89] found id: ""
	I1212 01:40:38.819630  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.819639  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:38.819649  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:38.819726  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:38.863492  291455 cri.go:89] found id: ""
	I1212 01:40:38.863555  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.863567  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:38.863574  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:38.863640  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:38.896081  291455 cri.go:89] found id: ""
	I1212 01:40:38.896109  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.896118  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:38.896124  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:38.896189  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:38.923782  291455 cri.go:89] found id: ""
	I1212 01:40:38.923824  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.923832  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:38.923838  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:38.923896  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:38.948257  291455 cri.go:89] found id: ""
	I1212 01:40:38.948289  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.948305  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:38.948312  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:38.948379  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:38.974066  291455 cri.go:89] found id: ""
	I1212 01:40:38.974090  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.974098  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:38.974104  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:38.974163  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:38.999566  291455 cri.go:89] found id: ""
	I1212 01:40:38.999654  291455 logs.go:282] 0 containers: []
	W1212 01:40:38.999670  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:38.999681  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:38.999693  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:39.032809  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:39.032845  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:39.061204  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:39.061234  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:39.116485  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:39.116516  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:39.129984  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:39.130014  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:39.195391  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:39.187100   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.187857   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.189545   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.190069   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:39.191706   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:41.695676  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:41.707011  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:41.707085  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:41.731224  291455 cri.go:89] found id: ""
	I1212 01:40:41.731295  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.731318  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:41.731337  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:41.731422  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:41.759193  291455 cri.go:89] found id: ""
	I1212 01:40:41.759266  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.759289  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:41.759308  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:41.759394  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:41.793923  291455 cri.go:89] found id: ""
	I1212 01:40:41.793994  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.794017  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:41.794038  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:41.794121  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:41.844183  291455 cri.go:89] found id: ""
	I1212 01:40:41.844246  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.844277  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:41.844297  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:41.844405  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:41.880181  291455 cri.go:89] found id: ""
	I1212 01:40:41.880253  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.880288  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:41.880312  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:41.880412  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:41.908685  291455 cri.go:89] found id: ""
	I1212 01:40:41.908760  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.908776  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:41.908783  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:41.908840  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:41.933232  291455 cri.go:89] found id: ""
	I1212 01:40:41.933257  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.933265  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:41.933272  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:41.933361  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:41.957941  291455 cri.go:89] found id: ""
	I1212 01:40:41.957966  291455 logs.go:282] 0 containers: []
	W1212 01:40:41.957975  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:41.957993  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:41.958004  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:42.012839  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:42.012878  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:42.028378  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:42.028410  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:42.099435  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:42.089806   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.091059   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.092313   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.093469   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:42.094522   11934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:42.099461  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:42.099477  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:42.127956  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:42.127997  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
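	All of the commands above run inside the node. From the test host they can be wrapped in minikube ssh; the profile name below is a placeholder, since it is not visible in this stretch of the log:

	    # <profile> stands in for the actual profile under test
	    minikube ssh -p <profile> "sudo crictl ps -a"
	    minikube ssh -p <profile> "sudo journalctl -u kubelet -n 400"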
	I1212 01:40:44.666695  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:44.677340  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:44.677417  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:44.701562  291455 cri.go:89] found id: ""
	I1212 01:40:44.701585  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.701594  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:44.701600  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:44.701657  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:44.726430  291455 cri.go:89] found id: ""
	I1212 01:40:44.726452  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.726460  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:44.726466  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:44.726555  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:44.755275  291455 cri.go:89] found id: ""
	I1212 01:40:44.755298  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.755306  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:44.755312  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:44.755367  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:44.780079  291455 cri.go:89] found id: ""
	I1212 01:40:44.780105  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.780114  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:44.780120  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:44.780194  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:44.869405  291455 cri.go:89] found id: ""
	I1212 01:40:44.869429  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.869437  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:44.869444  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:44.869510  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:44.895160  291455 cri.go:89] found id: ""
	I1212 01:40:44.895186  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.895195  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:44.895201  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:44.895258  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:44.919698  291455 cri.go:89] found id: ""
	I1212 01:40:44.919721  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.919730  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:44.919736  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:44.919792  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:44.944054  291455 cri.go:89] found id: ""
	I1212 01:40:44.944076  291455 logs.go:282] 0 containers: []
	W1212 01:40:44.944085  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:44.944093  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:44.944104  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:44.968670  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:44.968701  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:44.997722  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:44.997750  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:45.076118  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:45.076163  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:45.092613  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:45.092646  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:45.185594  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:45.175075   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.176253   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.177119   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.179849   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:45.180652   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
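	When the loop keeps coming back empty like this, two quick checks usually explain a persistent refusal on 8443: whether anything listens on the port, and what the kubelet last complained about (ss from iproute2 is assumed present in the node image):

	    # any listener on the apiserver port?
	    sudo ss -ltn 'sport = :8443'

	    # recent kubelet errors often show why the static pods never started
	    sudo journalctl -u kubelet -n 400 | grep -iE 'error|fail' | tail -n 20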
	I1212 01:40:47.686812  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:47.697462  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:47.697534  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:47.725301  291455 cri.go:89] found id: ""
	I1212 01:40:47.725327  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.725336  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:47.725342  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:47.725406  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:47.750015  291455 cri.go:89] found id: ""
	I1212 01:40:47.750040  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.750050  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:47.750057  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:47.750116  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:47.774576  291455 cri.go:89] found id: ""
	I1212 01:40:47.774604  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.774613  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:47.774620  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:47.774679  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:47.823337  291455 cri.go:89] found id: ""
	I1212 01:40:47.823365  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.823374  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:47.823381  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:47.823451  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:47.863754  291455 cri.go:89] found id: ""
	I1212 01:40:47.863776  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.863785  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:47.863791  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:47.863851  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:47.892358  291455 cri.go:89] found id: ""
	I1212 01:40:47.892383  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.892391  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:47.892398  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:47.892463  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:47.916778  291455 cri.go:89] found id: ""
	I1212 01:40:47.916805  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.916815  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:47.916821  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:47.916900  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:47.942154  291455 cri.go:89] found id: ""
	I1212 01:40:47.942177  291455 logs.go:282] 0 containers: []
	W1212 01:40:47.942185  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:47.942194  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:47.942208  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:47.955644  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:47.955725  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:48.027299  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:48.016837   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.017641   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.019747   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.020636   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:48.022832   12155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:48.027326  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:48.027340  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:48.052933  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:48.052970  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:48.089641  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:48.089674  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
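The cycle above is minikube's log collector scanning for each expected control-plane container by name; an empty ID list (found id: "") for every component means none of them were ever created, so there is nothing to pull logs from. The same scan can be reproduced by hand with the crictl flags shown in the log (a sketch only; it assumes crictl is available inside the node, e.g. via minikube ssh):

    # Query containerd for each control-plane component, using the same crictl flags as the log.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done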
	I1212 01:40:50.649196  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:50.660069  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:50.660143  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:50.685271  291455 cri.go:89] found id: ""
	I1212 01:40:50.685299  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.685309  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:50.685316  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:50.685378  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:50.712999  291455 cri.go:89] found id: ""
	I1212 01:40:50.713025  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.713034  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:50.713040  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:50.713099  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:50.737720  291455 cri.go:89] found id: ""
	I1212 01:40:50.737745  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.737754  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:50.737761  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:50.737828  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:50.763261  291455 cri.go:89] found id: ""
	I1212 01:40:50.763286  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.763294  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:50.763300  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:50.763358  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:50.811665  291455 cri.go:89] found id: ""
	I1212 01:40:50.811692  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.811701  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:50.811707  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:50.811768  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:50.870884  291455 cri.go:89] found id: ""
	I1212 01:40:50.870909  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.870921  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:50.870927  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:50.870986  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:50.896362  291455 cri.go:89] found id: ""
	I1212 01:40:50.896387  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.896395  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:50.896401  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:50.896457  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:50.924933  291455 cri.go:89] found id: ""
	I1212 01:40:50.924956  291455 logs.go:282] 0 containers: []
	W1212 01:40:50.924964  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:50.924974  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:50.924986  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:50.982505  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:50.982537  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:50.996444  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:50.996467  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:51.075810  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:51.067309   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.068132   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.069797   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.070277   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:51.071678   12269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:51.075896  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:51.075929  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:51.100541  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:51.100577  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:53.629887  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:53.640204  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:53.640274  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:53.665408  291455 cri.go:89] found id: ""
	I1212 01:40:53.665487  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.665511  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:53.665531  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:53.665616  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:53.693593  291455 cri.go:89] found id: ""
	I1212 01:40:53.693620  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.693629  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:53.693635  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:53.693693  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:53.717209  291455 cri.go:89] found id: ""
	I1212 01:40:53.717234  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.717243  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:53.717249  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:53.717305  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:53.742008  291455 cri.go:89] found id: ""
	I1212 01:40:53.742033  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.742042  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:53.742049  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:53.742106  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:53.766463  291455 cri.go:89] found id: ""
	I1212 01:40:53.766489  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.766498  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:53.766505  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:53.766562  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:53.832090  291455 cri.go:89] found id: ""
	I1212 01:40:53.832118  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.832133  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:53.832140  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:53.832201  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:53.877395  291455 cri.go:89] found id: ""
	I1212 01:40:53.877422  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.877431  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:53.877438  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:53.877497  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:53.905857  291455 cri.go:89] found id: ""
	I1212 01:40:53.905883  291455 logs.go:282] 0 containers: []
	W1212 01:40:53.905891  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:53.905900  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:53.905912  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:53.936211  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:53.936236  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:53.990768  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:53.990801  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:54.005707  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:54.005751  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:54.077323  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:54.068912   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.069627   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.071278   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.071804   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:54.073346   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:54.077345  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:54.077361  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:56.603783  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:56.614362  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:56.614437  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:56.639205  291455 cri.go:89] found id: ""
	I1212 01:40:56.639230  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.639239  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:56.639245  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:56.639302  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:56.664961  291455 cri.go:89] found id: ""
	I1212 01:40:56.664983  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.664991  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:56.664997  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:56.665055  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:56.689125  291455 cri.go:89] found id: ""
	I1212 01:40:56.689148  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.689163  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:56.689169  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:56.689228  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:56.713944  291455 cri.go:89] found id: ""
	I1212 01:40:56.713969  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.713977  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:56.713984  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:56.714045  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:56.742503  291455 cri.go:89] found id: ""
	I1212 01:40:56.742536  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.742546  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:56.742552  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:56.742610  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:56.768074  291455 cri.go:89] found id: ""
	I1212 01:40:56.768101  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.768110  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:56.768116  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:56.768176  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:56.822219  291455 cri.go:89] found id: ""
	I1212 01:40:56.822241  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.822250  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:56.822256  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:56.822326  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:56.877551  291455 cri.go:89] found id: ""
	I1212 01:40:56.877579  291455 logs.go:282] 0 containers: []
	W1212 01:40:56.877588  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:56.877598  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:56.877609  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:56.951400  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:56.942725   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.943403   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.945223   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.945864   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:56.947463   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:56.951423  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:56.951435  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:40:56.976432  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:40:56.976471  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:40:57.016067  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:57.016095  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:57.076530  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:57.076562  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
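When the container scan comes up empty, the collector falls back to host-level sources: kubelet and containerd via journalctl, filtered dmesg, crictl/docker ps for container status, and a "describe nodes" attempt with the bundled kubectl, which keeps failing here because the apiserver is down. The describe step can be rerun manually with the exact paths from the log; the healthz curl at the end is an added fallback for illustration, not something the test itself runs:

    # Re-run the collector's describe-nodes probe; on failure, poke the apiserver directly.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig \
      || curl -sk https://localhost:8443/healthz

Both commands must run inside the node (minikube ssh), since localhost:8443 refers to the node's own loopback.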
	I1212 01:40:59.590650  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:40:59.601442  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:40:59.601513  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:40:59.627392  291455 cri.go:89] found id: ""
	I1212 01:40:59.627418  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.627426  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:40:59.627433  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:40:59.627492  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:40:59.652525  291455 cri.go:89] found id: ""
	I1212 01:40:59.652546  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.652555  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:40:59.652560  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:40:59.652620  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:40:59.677515  291455 cri.go:89] found id: ""
	I1212 01:40:59.677538  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.677546  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:40:59.677551  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:40:59.677609  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:40:59.701508  291455 cri.go:89] found id: ""
	I1212 01:40:59.701531  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.701539  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:40:59.701545  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:40:59.701602  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:40:59.726132  291455 cri.go:89] found id: ""
	I1212 01:40:59.726154  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.726162  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:40:59.726168  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:40:59.726228  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:40:59.751581  291455 cri.go:89] found id: ""
	I1212 01:40:59.751608  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.751617  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:40:59.751625  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:40:59.751682  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:40:59.780780  291455 cri.go:89] found id: ""
	I1212 01:40:59.780805  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.780825  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:40:59.780836  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:40:59.780901  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:40:59.866401  291455 cri.go:89] found id: ""
	I1212 01:40:59.866424  291455 logs.go:282] 0 containers: []
	W1212 01:40:59.866433  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:40:59.866442  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:40:59.866453  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:40:59.921825  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:40:59.921862  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:40:59.935338  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:40:59.935366  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:40:59.999474  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:40:59.992159   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.992558   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.993995   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.994293   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:40:59.995686   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:40:59.999546  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:40:59.999574  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:00.079868  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:00.084769  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:02.719157  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:02.730262  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:02.730335  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:02.756172  291455 cri.go:89] found id: ""
	I1212 01:41:02.756196  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.756206  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:02.756213  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:02.756272  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:02.792420  291455 cri.go:89] found id: ""
	I1212 01:41:02.792445  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.792455  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:02.792461  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:02.792531  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:02.838813  291455 cri.go:89] found id: ""
	I1212 01:41:02.838841  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.838849  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:02.838856  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:02.838918  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:02.886478  291455 cri.go:89] found id: ""
	I1212 01:41:02.886504  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.886513  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:02.886523  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:02.886580  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:02.914286  291455 cri.go:89] found id: ""
	I1212 01:41:02.914309  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.914318  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:02.914333  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:02.914403  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:02.939527  291455 cri.go:89] found id: ""
	I1212 01:41:02.939550  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.939559  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:02.939565  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:02.939624  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:02.965321  291455 cri.go:89] found id: ""
	I1212 01:41:02.965345  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.965354  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:02.965360  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:02.965423  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:02.991292  291455 cri.go:89] found id: ""
	I1212 01:41:02.991316  291455 logs.go:282] 0 containers: []
	W1212 01:41:02.991325  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:02.991341  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:02.991352  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:03.019527  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:03.019562  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:03.051852  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:03.051878  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:03.107633  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:03.107667  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:03.121349  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:03.121375  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:03.186261  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:03.177889   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.178763   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.180270   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.180822   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:03.182351   12737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:41:05.687947  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:05.698808  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:05.698883  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:05.724019  291455 cri.go:89] found id: ""
	I1212 01:41:05.724043  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.724052  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:05.724058  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:05.724115  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:05.752813  291455 cri.go:89] found id: ""
	I1212 01:41:05.752838  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.752847  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:05.752853  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:05.752917  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:05.777122  291455 cri.go:89] found id: ""
	I1212 01:41:05.777144  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.777152  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:05.777158  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:05.777215  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:05.833235  291455 cri.go:89] found id: ""
	I1212 01:41:05.833260  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.833270  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:05.833276  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:05.833350  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:05.880483  291455 cri.go:89] found id: ""
	I1212 01:41:05.880506  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.880514  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:05.880520  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:05.880583  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:05.904810  291455 cri.go:89] found id: ""
	I1212 01:41:05.904834  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.904843  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:05.904849  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:05.904906  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:05.936458  291455 cri.go:89] found id: ""
	I1212 01:41:05.936482  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.936491  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:05.936497  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:05.936585  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:05.965168  291455 cri.go:89] found id: ""
	I1212 01:41:05.965193  291455 logs.go:282] 0 containers: []
	W1212 01:41:05.965202  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:05.965212  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:05.965225  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:06.022621  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:06.022674  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:06.036897  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:06.036926  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:06.105481  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:06.097089   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.097938   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.099584   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.099907   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:06.101467   12840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1212 01:41:06.105505  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:06.105518  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:06.131153  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:06.131186  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:08.659864  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:08.670811  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:08.670881  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:08.694882  291455 cri.go:89] found id: ""
	I1212 01:41:08.694903  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.694911  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:08.694917  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:08.694976  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:08.719560  291455 cri.go:89] found id: ""
	I1212 01:41:08.719590  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.719598  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:08.719605  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:08.719662  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:08.744076  291455 cri.go:89] found id: ""
	I1212 01:41:08.744103  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.744113  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:08.744119  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:08.744177  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:08.772960  291455 cri.go:89] found id: ""
	I1212 01:41:08.772985  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.772994  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:08.773001  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:08.773080  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:08.815633  291455 cri.go:89] found id: ""
	I1212 01:41:08.815659  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.815668  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:08.815674  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:08.815742  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:08.878320  291455 cri.go:89] found id: ""
	I1212 01:41:08.878345  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.878353  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:08.878360  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:08.878450  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:08.904601  291455 cri.go:89] found id: ""
	I1212 01:41:08.904628  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.904636  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:08.904643  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:08.904702  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:08.929638  291455 cri.go:89] found id: ""
	I1212 01:41:08.929660  291455 logs.go:282] 0 containers: []
	W1212 01:41:08.929668  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:08.929678  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:08.929689  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:08.987700  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:08.987732  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:09.006748  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:09.006844  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:09.074571  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:09.066680   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.067299   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.068802   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.069203   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.070675   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:41:09.066680   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.067299   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.068802   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.069203   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:09.070675   12955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
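Every retry in this loop fails the same way: nothing answers on localhost:8443 inside the node, so kubectl cannot even fetch the API group list. A faster manual probe than re-running `kubectl describe nodes` (a minimal sketch, assuming a shell inside the node, e.g. via `minikube ssh -p newest-cni-256959`):

    # Is anything listening on the apiserver port?
    sudo ss -tlnp | grep 8443 || echo "no listener on 8443"
    # Does the apiserver answer its health endpoint?
    curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"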
	I1212 01:41:09.074595  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:09.074607  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:09.099568  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:09.099599  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:11.629539  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:11.640012  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:11.640082  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:11.663460  291455 cri.go:89] found id: ""
	I1212 01:41:11.663485  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.663493  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:11.663500  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:11.663555  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:11.686956  291455 cri.go:89] found id: ""
	I1212 01:41:11.686978  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.686986  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:11.687088  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:11.687150  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:11.712890  291455 cri.go:89] found id: ""
	I1212 01:41:11.712913  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.712922  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:11.712928  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:11.712984  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:11.736706  291455 cri.go:89] found id: ""
	I1212 01:41:11.736728  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.736736  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:11.736742  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:11.736800  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:11.759893  291455 cri.go:89] found id: ""
	I1212 01:41:11.759915  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.759923  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:11.759929  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:11.759986  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:11.794524  291455 cri.go:89] found id: ""
	I1212 01:41:11.794548  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.794556  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:11.794563  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:11.794617  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:11.837664  291455 cri.go:89] found id: ""
	I1212 01:41:11.837685  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.837693  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:11.837699  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:11.837758  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:11.876539  291455 cri.go:89] found id: ""
	I1212 01:41:11.876560  291455 logs.go:282] 0 containers: []
	W1212 01:41:11.876568  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:11.876576  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:11.876588  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:11.891935  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:11.891958  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:11.953883  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:11.945499   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.946165   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.947829   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.948378   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.949885   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:41:11.945499   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.946165   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.947829   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.948378   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:11.949885   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:41:11.953906  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:11.953919  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:11.978361  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:11.978394  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:12.008436  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:12.008467  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:14.566794  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:14.577540  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:14.577620  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:14.603419  291455 cri.go:89] found id: ""
	I1212 01:41:14.603444  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.603453  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:14.603459  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:14.603523  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:14.627963  291455 cri.go:89] found id: ""
	I1212 01:41:14.627986  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.627994  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:14.628000  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:14.628064  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:14.651989  291455 cri.go:89] found id: ""
	I1212 01:41:14.652014  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.652024  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:14.652031  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:14.652089  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:14.680771  291455 cri.go:89] found id: ""
	I1212 01:41:14.680794  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.680802  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:14.680808  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:14.680865  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:14.705454  291455 cri.go:89] found id: ""
	I1212 01:41:14.705479  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.705488  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:14.705494  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:14.705553  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:14.734181  291455 cri.go:89] found id: ""
	I1212 01:41:14.734207  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.734216  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:14.734222  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:14.734279  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:14.758125  291455 cri.go:89] found id: ""
	I1212 01:41:14.758150  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.758159  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:14.758165  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:14.758224  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:14.796212  291455 cri.go:89] found id: ""
	I1212 01:41:14.796239  291455 logs.go:282] 0 containers: []
	W1212 01:41:14.796248  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:14.796257  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:14.796268  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:14.875942  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:14.875982  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:14.893694  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:14.893723  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:14.958664  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:14.950439   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.951146   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.952867   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.953336   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.954860   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:41:14.950439   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.951146   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.952867   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.953336   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:14.954860   13182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:41:14.958686  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:14.958698  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:14.983555  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:14.983592  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:17.522313  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:17.532817  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:41:17.532892  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:41:17.560757  291455 cri.go:89] found id: ""
	I1212 01:41:17.560779  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.560788  291455 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:41:17.560795  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1212 01:41:17.560851  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:41:17.585702  291455 cri.go:89] found id: ""
	I1212 01:41:17.585725  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.585734  291455 logs.go:284] No container was found matching "etcd"
	I1212 01:41:17.585740  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1212 01:41:17.585807  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:41:17.614888  291455 cri.go:89] found id: ""
	I1212 01:41:17.614912  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.614920  291455 logs.go:284] No container was found matching "coredns"
	I1212 01:41:17.614926  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:41:17.614983  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:41:17.640684  291455 cri.go:89] found id: ""
	I1212 01:41:17.640706  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.640714  291455 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:41:17.640721  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:41:17.640781  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:41:17.666504  291455 cri.go:89] found id: ""
	I1212 01:41:17.666529  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.666538  291455 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:41:17.666545  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:41:17.666619  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:41:17.693636  291455 cri.go:89] found id: ""
	I1212 01:41:17.693661  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.693670  291455 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:41:17.693677  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1212 01:41:17.693738  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:41:17.718203  291455 cri.go:89] found id: ""
	I1212 01:41:17.718270  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.718310  291455 logs.go:284] No container was found matching "kindnet"
	I1212 01:41:17.718337  291455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:41:17.718430  291455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:41:17.745520  291455 cri.go:89] found id: ""
	I1212 01:41:17.745544  291455 logs.go:282] 0 containers: []
	W1212 01:41:17.745553  291455 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:41:17.745562  291455 logs.go:123] Gathering logs for kubelet ...
	I1212 01:41:17.745574  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:41:17.809137  291455 logs.go:123] Gathering logs for dmesg ...
	I1212 01:41:17.809237  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:41:17.824842  291455 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:41:17.824909  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:41:17.914329  291455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:17.905491   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.906027   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.907410   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.907912   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.909473   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 01:41:17.905491   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.906027   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.907410   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.907912   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:17.909473   13293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:41:17.914350  291455 logs.go:123] Gathering logs for containerd ...
	I1212 01:41:17.914365  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1212 01:41:17.939510  291455 logs.go:123] Gathering logs for container status ...
	I1212 01:41:17.939546  291455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:41:20.466980  291455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:41:20.480747  291455 out.go:203] 
	W1212 01:41:20.483558  291455 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1212 01:41:20.483596  291455 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1212 01:41:20.483610  291455 out.go:285] * Related issues:
	W1212 01:41:20.483628  291455 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1212 01:41:20.483644  291455 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1212 01:41:20.486471  291455 out.go:203] 
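The exit condition is the `pgrep` probe that opens every cycle above: after the 6m0s wait it has never matched a kube-apiserver process, so minikube aborts with K8S_APISERVER_MISSING. Both checks minikube runs can be reproduced by hand inside the node (commands taken verbatim from the log above):

    # Process-level check minikube polls each cycle
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # CRI-level check for an apiserver container, running or exited
    sudo crictl ps -a --quiet --name=kube-apiserver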
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245221319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245292023Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245392938Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245465406Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245525534Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245588713Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245646150Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245704997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245771016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.245854791Z" level=info msg="Connect containerd service"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.246200073Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.246847141Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.263118416Z" level=info msg="Start subscribing containerd event"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.263340210Z" level=info msg="Start recovering state"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.263271673Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.264204469Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.302901466Z" level=info msg="Start event monitor"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.302971940Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.302983534Z" level=info msg="Start streaming server"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.303030213Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.303039617Z" level=info msg="runtime interface starting up..."
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.303045705Z" level=info msg="starting plugins..."
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.303228803Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 01:35:17 newest-cni-256959 containerd[555]: time="2025-12-12T01:35:17.303400291Z" level=info msg="containerd successfully booted in 0.083333s"
	Dec 12 01:35:17 newest-cni-256959 systemd[1]: Started containerd.service - containerd container runtime.
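containerd itself boots cleanly; its only error is the missing CNI config, which is routine this early, since the files under /etc/cni/net.d are normally written later during cluster bring-up. A one-line check (a sketch, assuming a shell in the node):

    ls -l /etc/cni/net.d 2>/dev/null || echo "no CNI config installed yet"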
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:41:33.903729   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:33.904534   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:33.906110   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:33.906541   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:41:33.908143   13959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	[Dec12 00:40] hrtimer: interrupt took 11339963 ns
	
	
	==> kernel <==
	 01:41:33 up  2:23,  0 user,  load average: 0.69, 0.67, 1.26
	Linux newest-cni-256959 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 01:41:30 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:41:31 newest-cni-256959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
	Dec 12 01:41:31 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:31 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:31 newest-cni-256959 kubelet[13820]: E1212 01:41:31.409037   13820 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:41:31 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:41:31 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:41:32 newest-cni-256959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 12 01:41:32 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:32 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:32 newest-cni-256959 kubelet[13841]: E1212 01:41:32.152589   13841 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:41:32 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:41:32 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:41:32 newest-cni-256959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 12 01:41:32 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:32 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:32 newest-cni-256959 kubelet[13862]: E1212 01:41:32.906613   13862 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:41:32 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:41:32 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:41:33 newest-cni-256959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 12 01:41:33 newest-cni-256959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:33 newest-cni-256959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:41:33 newest-cni-256959 kubelet[13880]: E1212 01:41:33.656965   13880 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:41:33 newest-cni-256959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:41:33 newest-cni-256959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
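This is the root cause of the whole failure chain: the v1.35.0-beta.0 kubelet refuses to run on a cgroup v1 host, so the static pods (including kube-apiserver) are never created, which explains every "connection refused" above. The kernel line above (5.15.0-1084-aws, #91~20.04.1-Ubuntu) indicates an Ubuntu 20.04 host, which boots with cgroup v1 by default. A quick way to confirm which cgroup version a host is running:

    stat -fc %T /sys/fs/cgroup/
    # cgroup2fs -> cgroup v2; tmpfs -> cgroup v1 (hybrid hierarchy)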
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-256959 -n newest-cni-256959
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-256959 -n newest-cni-256959: exit status 2 (332.702166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-256959" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.89s)

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (269.84s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 01:48:39.153941    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/auto-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 01:48:41.615081    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 01:48:52.051842    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 01:49:20.115955    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/auto-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused (repeated 27 times)
E1212 01:49:47.046464    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kindnet-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key" (repeated 8 times with increasing backoff, 01:49:47.046 to 01:49:47.691)
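The interleaved cert_rotation.go errors come from client-go's transport cache trying to reload a client certificate for a profile whose files have already been deleted. Below is a minimal sketch of just that failure mode, assuming the same (now-removed) profile path from the log; the client.key path is an assumption paired with the reported client.crt, and this is not client-go's actual rotation code:

```go
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Profile path copied from the log; the .key filename is an assumption,
	// paired with the .crt that cert_rotation.go reports.
	base := "/home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kindnet-341847/"
	// Once the profile directory is gone, the reload fails with an os-level
	// "no such file or directory", which client-go logs and retries with backoff.
	if _, err := tls.LoadX509KeyPair(base+"client.crt", base+"client.key"); err != nil {
		fmt.Println(`"Loading client cert failed" err=`, err)
	}
}
```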
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused (repeated 2 times)
E1212 01:49:49.615675    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kindnet-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused (repeated 3 times)
E1212 01:49:52.177110    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kindnet-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused (repeated 5 times)
E1212 01:49:57.299117    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kindnet-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused (repeated 10 times)
E1212 01:50:07.540439    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kindnet-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused (repeated 21 times)
E1212 01:50:28.022372    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kindnet-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused (repeated 12 times)
E1212 01:50:40.124452    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused (repeated 2 times)
E1212 01:50:42.037478    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/auto-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused (repeated 8 times)
E1212 01:50:50.110916    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/old-k8s-version-147581/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused (repeated 7 times)
E1212 01:50:57.042862    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused (repeated 12 times)
E1212 01:51:08.983957    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kindnet-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused (repeated 6 times)
I1212 01:51:15.056318    4290 config.go:182] Loaded profile config "flannel-341847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused (repeated 7 times)
E1212 01:51:22.588579    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/calico-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key" (repeated 7 times with increasing backoff, 01:51:22.588 to 01:51:22.912)
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1212 01:51:23.233954    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/calico-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key" (retried again at 01:51:23.875)
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused (repeated 2 times)
E1212 01:51:25.157228    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/calico-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused (repeated 2 times)
E1212 01:51:27.718626    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/calico-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused (repeated 5 times)
E1212 01:51:32.840761    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/calico-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused (repeated 11 times)
E1212 01:51:43.082065    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/calico-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
    [previous WARNING repeated 19 more times]
E1212 01:52:03.564176    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/calico-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
    [previous WARNING repeated 26 more times]
E1212 01:52:30.905583    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kindnet-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
    [previous WARNING repeated 13 more times]
E1212 01:52:44.525950    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/calico-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
    [previous WARNING repeated 7 more times]
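The E1212 cert_rotation errors interleaved with the poll above reference client certs under .minikube/profiles for calico-341847 and kindnet-341847 that no longer exist on disk, which suggests a kubeconfig context (or cached transport) still points at profiles deleted earlier in the run. A minimal cleanup sketch, assuming the stale context/user/cluster entries (hypothetical names here) match the profile names:

    kubectl config get-contexts                  # see what the kubeconfig still references
    kubectl config delete-context calico-341847  # then prune each stale entry
    kubectl config delete-user calico-341847
    kubectl config delete-cluster calico-341847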
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-361053 -n no-preload-361053
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-361053 -n no-preload-361053: exit status 2 (337.229448ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-361053" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-361053 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-361053 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.616µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-361053 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
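Each WARNING in the poll loop above is the same apiserver query dying at the TCP level, so the failure shape can be reproduced by hand with the ordinary client instead of the test helper. A minimal sketch, assuming the no-preload-361053 kubeconfig context from this run still exists:

    # the query the helper retries, as a label-selector list:
    kubectl --context no-preload-361053 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

    # or the exact REST path from the warning text:
    kubectl --context no-preload-361053 get --raw '/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard'

Both should fail with the same connection refused until kube-apiserver on 192.168.85.2:8443 comes back.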
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-361053
helpers_test.go:244: (dbg) docker inspect no-preload-361053:

-- stdout --
	[
	    {
	        "Id": "68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd",
	        "Created": "2025-12-12T01:22:53.604240637Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 287337,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T01:33:10.69835803Z",
	            "FinishedAt": "2025-12-12T01:33:09.357122497Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/hostname",
	        "HostsPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/hosts",
	        "LogPath": "/var/lib/docker/containers/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd/68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd-json.log",
	        "Name": "/no-preload-361053",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-361053:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-361053",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "68256fe8de3b1095c08410a1944bd38bf839bd39422b8ff089b376dc48f33bfd",
	                "LowerDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9169bb0f18d4483461fe8404c60d5cb61a51170a9dcf8e503c392f8c9d01abee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-361053",
	                "Source": "/var/lib/docker/volumes/no-preload-361053/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-361053",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-361053",
	                "name.minikube.sigs.k8s.io": "no-preload-361053",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "61cc494dd067263f866e7781df4148bb8c831ce7801f7a97e8775eb48f40b482",
	            "SandboxKey": "/var/run/docker/netns/61cc494dd067",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-361053": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:bb:a3:34:c6:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee086efedb5c3900c251cd31f9316499408470e70a7d486e64d8b91c6bf60cd7",
	                    "EndpointID": "f480dff36972a9a192fc5dc57b92877bed5645512d8423e9e85ac35e1acb41cd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-361053",
	                        "68256fe8de3b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
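Most of the inspect dump above is boilerplate; the two facts that matter here are that the container is running and that the apiserver's 8443/tcp is published on a loopback port. Both can be extracted directly with an inspect format template; a sketch against the same container:

    docker inspect no-preload-361053 --format 'state={{.State.Status}} apiserver={{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostIp}}:{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'

For this run it should print state=running apiserver=127.0.0.1:33101, matching the NetworkSettings block above.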
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-361053 -n no-preload-361053
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-361053 -n no-preload-361053: exit status 2 (309.895109ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
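Taken together, the two status probes show the contradiction behind this failure: the host container is Running while the apiserver inside it is Stopped. A quick host-side confirmation, assuming the 8443/tcp -> 127.0.0.1:33101 mapping from the inspect output is still current:

    # expect connection refused while kube-apiserver is down, an ok body once it is healthy:
    curl -sk https://127.0.0.1:33101/livez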
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-361053 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                     │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-341847 sudo systemctl status kubelet --all --full --no-pager                                                                      │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │ 12 Dec 25 01:51 UTC │
	│ ssh     │ -p flannel-341847 sudo systemctl cat kubelet --no-pager                                                                                      │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │ 12 Dec 25 01:51 UTC │
	│ ssh     │ -p flannel-341847 sudo journalctl -xeu kubelet --all --full --no-pager                                                                       │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │ 12 Dec 25 01:51 UTC │
	│ ssh     │ -p flannel-341847 sudo cat /etc/kubernetes/kubelet.conf                                                                                      │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │ 12 Dec 25 01:51 UTC │
	│ ssh     │ -p flannel-341847 sudo cat /var/lib/kubelet/config.yaml                                                                                      │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │ 12 Dec 25 01:51 UTC │
	│ ssh     │ -p flannel-341847 sudo systemctl status docker --all --full --no-pager                                                                       │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │                     │
	│ ssh     │ -p flannel-341847 sudo systemctl cat docker --no-pager                                                                                       │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │ 12 Dec 25 01:51 UTC │
	│ ssh     │ -p flannel-341847 sudo cat /etc/docker/daemon.json                                                                                           │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │                     │
	│ ssh     │ -p flannel-341847 sudo docker system info                                                                                                    │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │                     │
	│ ssh     │ -p flannel-341847 sudo systemctl status cri-docker --all --full --no-pager                                                                   │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │                     │
	│ ssh     │ -p flannel-341847 sudo systemctl cat cri-docker --no-pager                                                                                   │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │ 12 Dec 25 01:51 UTC │
	│ ssh     │ -p flannel-341847 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                              │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │                     │
	│ ssh     │ -p flannel-341847 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                        │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │ 12 Dec 25 01:51 UTC │
	│ ssh     │ -p flannel-341847 sudo cri-dockerd --version                                                                                                 │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │ 12 Dec 25 01:51 UTC │
	│ ssh     │ -p flannel-341847 sudo systemctl status containerd --all --full --no-pager                                                                   │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │ 12 Dec 25 01:51 UTC │
	│ ssh     │ -p flannel-341847 sudo systemctl cat containerd --no-pager                                                                                   │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │ 12 Dec 25 01:51 UTC │
	│ ssh     │ -p flannel-341847 sudo cat /lib/systemd/system/containerd.service                                                                            │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │ 12 Dec 25 01:51 UTC │
	│ ssh     │ -p flannel-341847 sudo cat /etc/containerd/config.toml                                                                                       │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │ 12 Dec 25 01:51 UTC │
	│ ssh     │ -p flannel-341847 sudo containerd config dump                                                                                                │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │ 12 Dec 25 01:51 UTC │
	│ ssh     │ -p flannel-341847 sudo systemctl status crio --all --full --no-pager                                                                         │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │                     │
	│ ssh     │ -p flannel-341847 sudo systemctl cat crio --no-pager                                                                                         │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │ 12 Dec 25 01:51 UTC │
	│ ssh     │ -p flannel-341847 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                               │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │ 12 Dec 25 01:51 UTC │
	│ ssh     │ -p flannel-341847 sudo crio config                                                                                                           │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │ 12 Dec 25 01:51 UTC │
	│ delete  │ -p flannel-341847                                                                                                                            │ flannel-341847 │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │ 12 Dec 25 01:51 UTC │
	│ start   │ -p bridge-341847 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd │ bridge-341847  │ jenkins │ v1.37.0 │ 12 Dec 25 01:51 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 01:51:46
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:51:46.493240  352746 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:51:46.493388  352746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:51:46.493400  352746 out.go:374] Setting ErrFile to fd 2...
	I1212 01:51:46.493406  352746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:51:46.493671  352746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:51:46.494077  352746 out.go:368] Setting JSON to false
	I1212 01:51:46.494921  352746 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9253,"bootTime":1765495054,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 01:51:46.495022  352746 start.go:143] virtualization:  
	I1212 01:51:46.498515  352746 out.go:179] * [bridge-341847] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:51:46.502816  352746 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:51:46.502884  352746 notify.go:221] Checking for updates...
	I1212 01:51:46.509292  352746 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:51:46.512508  352746 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:51:46.515579  352746 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 01:51:46.518651  352746 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:51:46.521572  352746 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:51:46.525045  352746 config.go:182] Loaded profile config "no-preload-361053": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:51:46.525173  352746 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:51:46.548092  352746 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:51:46.548244  352746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:51:46.610724  352746 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:51:46.600731601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:51:46.610844  352746 docker.go:319] overlay module found
	I1212 01:51:46.614113  352746 out.go:179] * Using the docker driver based on user configuration
	I1212 01:51:46.616980  352746 start.go:309] selected driver: docker
	I1212 01:51:46.616995  352746 start.go:927] validating driver "docker" against <nil>
	I1212 01:51:46.617021  352746 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:51:46.617729  352746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:51:46.679290  352746 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:51:46.670177287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:51:46.679459  352746 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 01:51:46.679673  352746 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:51:46.682532  352746 out.go:179] * Using Docker driver with root privileges
	I1212 01:51:46.685456  352746 cni.go:84] Creating CNI manager for "bridge"
	I1212 01:51:46.685486  352746 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 01:51:46.685577  352746 start.go:353] cluster config:
	{Name:bridge-341847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-341847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:51:46.690589  352746 out.go:179] * Starting "bridge-341847" primary control-plane node in "bridge-341847" cluster
	I1212 01:51:46.693422  352746 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1212 01:51:46.696368  352746 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1212 01:51:46.699142  352746 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1212 01:51:46.699187  352746 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1212 01:51:46.699197  352746 cache.go:65] Caching tarball of preloaded images
	I1212 01:51:46.699231  352746 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1212 01:51:46.699313  352746 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1212 01:51:46.699325  352746 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1212 01:51:46.699428  352746 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/config.json ...
	I1212 01:51:46.699444  352746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/config.json: {Name:mkd7c70a37f7cf9356a9b5f3c8fe4cc6f928561f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:51:46.718127  352746 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1212 01:51:46.718150  352746 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1212 01:51:46.718169  352746 cache.go:243] Successfully downloaded all kic artifacts
	I1212 01:51:46.718198  352746 start.go:360] acquireMachinesLock for bridge-341847: {Name:mkc83d41045edb23e2457e06cbb8e9d89b4b5bae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:51:46.718309  352746 start.go:364] duration metric: took 90.29µs to acquireMachinesLock for "bridge-341847"
	I1212 01:51:46.718338  352746 start.go:93] Provisioning new machine with config: &{Name:bridge-341847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-341847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 01:51:46.718415  352746 start.go:125] createHost starting for "" (driver="docker")
	I1212 01:51:46.721841  352746 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 01:51:46.722053  352746 start.go:159] libmachine.API.Create for "bridge-341847" (driver="docker")
	I1212 01:51:46.722097  352746 client.go:173] LocalClient.Create starting
	I1212 01:51:46.722165  352746 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem
	I1212 01:51:46.722202  352746 main.go:143] libmachine: Decoding PEM data...
	I1212 01:51:46.722218  352746 main.go:143] libmachine: Parsing certificate...
	I1212 01:51:46.722271  352746 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem
	I1212 01:51:46.722288  352746 main.go:143] libmachine: Decoding PEM data...
	I1212 01:51:46.722298  352746 main.go:143] libmachine: Parsing certificate...
	I1212 01:51:46.722659  352746 cli_runner.go:164] Run: docker network inspect bridge-341847 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 01:51:46.739171  352746 cli_runner.go:211] docker network inspect bridge-341847 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 01:51:46.739273  352746 network_create.go:284] running [docker network inspect bridge-341847] to gather additional debugging logs...
	I1212 01:51:46.739294  352746 cli_runner.go:164] Run: docker network inspect bridge-341847
	W1212 01:51:46.755289  352746 cli_runner.go:211] docker network inspect bridge-341847 returned with exit code 1
	I1212 01:51:46.755320  352746 network_create.go:287] error running [docker network inspect bridge-341847]: docker network inspect bridge-341847: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-341847 not found
	I1212 01:51:46.755335  352746 network_create.go:289] output of [docker network inspect bridge-341847]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-341847 not found
	
	** /stderr **
	I1212 01:51:46.755439  352746 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:51:46.771625  352746 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4cd687b06342 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a2:e8:c8:87:d3:0a} reservation:<nil>}
	I1212 01:51:46.772000  352746 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c02c16721c9d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:e7:06:63:2c:e9} reservation:<nil>}
	I1212 01:51:46.772372  352746 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-805b07ff58c0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:18:35:7a:03:02} reservation:<nil>}
	I1212 01:51:46.772779  352746 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b5760}
	I1212 01:51:46.772807  352746 network_create.go:124] attempt to create docker network bridge-341847 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1212 01:51:46.772861  352746 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-341847 bridge-341847
	I1212 01:51:46.836455  352746 network_create.go:108] docker network bridge-341847 192.168.76.0/24 created
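
	[editor's note] The three "skipping subnet" lines above show how the free subnet gets picked: candidate private /24s are probed in sequence (.49, .58, .67, then .76 — a step of 9 in the third octet) and the first one no existing docker bridge occupies wins. A minimal re-creation of that walk, with the taken set hard-coded from this log (hypothetical code, not minikube's network.go):

	package main

	import "fmt"

	// Subnets the log above reports as taken by existing docker bridges.
	var taken = map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}

	func main() {
		// Step the third octet by 9, as the sequence in the log suggests,
		// until a free /24 is found.
		for octet := 49; octet < 256; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if taken[subnet] {
				fmt.Println("skipping subnet", subnet, "that is taken")
				continue
			}
			fmt.Println("using free private subnet", subnet) // prints 192.168.76.0/24
			return
		}
	}
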
	I1212 01:51:46.836492  352746 kic.go:121] calculated static IP "192.168.76.2" for the "bridge-341847" container
	I1212 01:51:46.836596  352746 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 01:51:46.853280  352746 cli_runner.go:164] Run: docker volume create bridge-341847 --label name.minikube.sigs.k8s.io=bridge-341847 --label created_by.minikube.sigs.k8s.io=true
	I1212 01:51:46.871429  352746 oci.go:103] Successfully created a docker volume bridge-341847
	I1212 01:51:46.871517  352746 cli_runner.go:164] Run: docker run --rm --name bridge-341847-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-341847 --entrypoint /usr/bin/test -v bridge-341847:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1212 01:51:47.366006  352746 oci.go:107] Successfully prepared a docker volume bridge-341847
	I1212 01:51:47.366089  352746 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1212 01:51:47.366106  352746 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 01:51:47.366182  352746 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v bridge-341847:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 01:51:51.447047  352746 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v bridge-341847:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.080823409s)
	I1212 01:51:51.447079  352746 kic.go:203] duration metric: took 4.080969954s to extract preloaded images to volume ...
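
	[editor's note] The extraction step timed above is an ordinary docker run: the lz4 preload tarball is mounted read-only into a throwaway kicbase container and untarred into the named volume. A host-side sketch with the paths and image digest copied from the log (error handling reduced to a panic):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		preload := "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4"
		kicbase := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f"
		// Mount the tarball at /preloaded.tar, the volume at /extractDir,
		// and let the kicbase image's tar unpack one into the other.
		cmd := exec.Command("docker", "run", "--rm", "--entrypoint", "/usr/bin/tar",
			"-v", preload+":/preloaded.tar:ro",
			"-v", "bridge-341847:/extractDir",
			kicbase,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
	}
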
	W1212 01:51:51.447219  352746 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 01:51:51.447336  352746 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 01:51:51.508493  352746 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-341847 --name bridge-341847 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-341847 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-341847 --network bridge-341847 --ip 192.168.76.2 --volume bridge-341847:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1212 01:51:51.821081  352746 cli_runner.go:164] Run: docker container inspect bridge-341847 --format={{.State.Running}}
	I1212 01:51:51.844949  352746 cli_runner.go:164] Run: docker container inspect bridge-341847 --format={{.State.Status}}
	I1212 01:51:51.864499  352746 cli_runner.go:164] Run: docker exec bridge-341847 stat /var/lib/dpkg/alternatives/iptables
	I1212 01:51:51.929643  352746 oci.go:144] the created container "bridge-341847" has a running status.
	I1212 01:51:51.929671  352746 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/bridge-341847/id_rsa...
	I1212 01:51:52.050319  352746 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-2343/.minikube/machines/bridge-341847/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 01:51:52.071498  352746 cli_runner.go:164] Run: docker container inspect bridge-341847 --format={{.State.Status}}
	I1212 01:51:52.094402  352746 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 01:51:52.094425  352746 kic_runner.go:114] Args: [docker exec --privileged bridge-341847 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 01:51:52.147686  352746 cli_runner.go:164] Run: docker container inspect bridge-341847 --format={{.State.Status}}
	I1212 01:51:52.176238  352746 machine.go:94] provisionDockerMachine start ...
	I1212 01:51:52.177790  352746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-341847
	I1212 01:51:52.205965  352746 main.go:143] libmachine: Using SSH client type: native
	I1212 01:51:52.206306  352746 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1212 01:51:52.206315  352746 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 01:51:52.207119  352746 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42278->127.0.0.1:33138: read: connection reset by peer
	I1212 01:51:55.358688  352746 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-341847
	
	I1212 01:51:55.358717  352746 ubuntu.go:182] provisioning hostname "bridge-341847"
	I1212 01:51:55.358789  352746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-341847
	I1212 01:51:55.376507  352746 main.go:143] libmachine: Using SSH client type: native
	I1212 01:51:55.376812  352746 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1212 01:51:55.376829  352746 main.go:143] libmachine: About to run SSH command:
	sudo hostname bridge-341847 && echo "bridge-341847" | sudo tee /etc/hostname
	I1212 01:51:55.536191  352746 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-341847
	
	I1212 01:51:55.536287  352746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-341847
	I1212 01:51:55.554240  352746 main.go:143] libmachine: Using SSH client type: native
	I1212 01:51:55.554563  352746 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1212 01:51:55.554584  352746 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-341847' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-341847/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-341847' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:51:55.703102  352746 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:51:55.703127  352746 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
	I1212 01:51:55.703177  352746 ubuntu.go:190] setting up certificates
	I1212 01:51:55.703198  352746 provision.go:84] configureAuth start
	I1212 01:51:55.703264  352746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-341847
	I1212 01:51:55.720090  352746 provision.go:143] copyHostCerts
	I1212 01:51:55.720170  352746 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
	I1212 01:51:55.720186  352746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
	I1212 01:51:55.720263  352746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
	I1212 01:51:55.720363  352746 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
	I1212 01:51:55.720372  352746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
	I1212 01:51:55.720400  352746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
	I1212 01:51:55.720467  352746 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
	I1212 01:51:55.720477  352746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
	I1212 01:51:55.720503  352746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
	I1212 01:51:55.720562  352746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.bridge-341847 san=[127.0.0.1 192.168.76.2 bridge-341847 localhost minikube]
	I1212 01:51:56.240364  352746 provision.go:177] copyRemoteCerts
	I1212 01:51:56.240433  352746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:51:56.240477  352746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-341847
	I1212 01:51:56.259049  352746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/bridge-341847/id_rsa Username:docker}
	I1212 01:51:56.362536  352746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 01:51:56.379640  352746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 01:51:56.396787  352746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:51:56.413540  352746 provision.go:87] duration metric: took 710.324033ms to configureAuth
	I1212 01:51:56.413609  352746 ubuntu.go:206] setting minikube options for container-runtime
	I1212 01:51:56.413831  352746 config.go:182] Loaded profile config "bridge-341847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 01:51:56.413846  352746 machine.go:97] duration metric: took 4.237589029s to provisionDockerMachine
	I1212 01:51:56.413854  352746 client.go:176] duration metric: took 9.691747472s to LocalClient.Create
	I1212 01:51:56.413875  352746 start.go:167] duration metric: took 9.691823059s to libmachine.API.Create "bridge-341847"
	I1212 01:51:56.413885  352746 start.go:293] postStartSetup for "bridge-341847" (driver="docker")
	I1212 01:51:56.413895  352746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:51:56.413953  352746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:51:56.414009  352746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-341847
	I1212 01:51:56.435995  352746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/bridge-341847/id_rsa Username:docker}
	I1212 01:51:56.538979  352746 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:51:56.542416  352746 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 01:51:56.542444  352746 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 01:51:56.542457  352746 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
	I1212 01:51:56.542531  352746 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
	I1212 01:51:56.542621  352746 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
	I1212 01:51:56.542729  352746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:51:56.550126  352746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:51:56.568150  352746 start.go:296] duration metric: took 154.250612ms for postStartSetup
	I1212 01:51:56.568519  352746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-341847
	I1212 01:51:56.585322  352746 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/config.json ...
	I1212 01:51:56.585604  352746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 01:51:56.585660  352746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-341847
	I1212 01:51:56.602476  352746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/bridge-341847/id_rsa Username:docker}
	I1212 01:51:56.703994  352746 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 01:51:56.708938  352746 start.go:128] duration metric: took 9.990508504s to createHost
	I1212 01:51:56.708968  352746 start.go:83] releasing machines lock for "bridge-341847", held for 9.990647475s
	I1212 01:51:56.709054  352746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-341847
	I1212 01:51:56.726011  352746 ssh_runner.go:195] Run: cat /version.json
	I1212 01:51:56.726062  352746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-341847
	I1212 01:51:56.726158  352746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:51:56.726211  352746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-341847
	I1212 01:51:56.750109  352746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/bridge-341847/id_rsa Username:docker}
	I1212 01:51:56.761200  352746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/bridge-341847/id_rsa Username:docker}
	I1212 01:51:56.948892  352746 ssh_runner.go:195] Run: systemctl --version
	I1212 01:51:56.955482  352746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:51:56.959904  352746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:51:56.960031  352746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:51:56.987375  352746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
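
	[editor's note] Before installing its own "bridge" CNI, the find/mv pipeline above renames any pre-existing bridge or podman CNI configs with a .mk_disabled suffix so the runtime will not load them. A hypothetical re-creation of that disabling pass (not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		entries, err := os.ReadDir("/etc/cni/net.d")
		if err != nil {
			panic(err)
		}
		for _, e := range entries {
			name := e.Name()
			if strings.HasSuffix(name, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				old := filepath.Join("/etc/cni/net.d", name)
				fmt.Println("disabling", old)
				_ = os.Rename(old, old+".mk_disabled")
			}
		}
	}
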
	I1212 01:51:56.987396  352746 start.go:496] detecting cgroup driver to use...
	I1212 01:51:56.987426  352746 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 01:51:56.987473  352746 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 01:51:57.004806  352746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 01:51:57.018549  352746 docker.go:218] disabling cri-docker service (if available) ...
	I1212 01:51:57.018613  352746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:51:57.036133  352746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:51:57.055206  352746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:51:57.172754  352746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:51:57.295965  352746 docker.go:234] disabling docker service ...
	I1212 01:51:57.296073  352746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:51:57.316586  352746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:51:57.329714  352746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:51:57.452784  352746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:51:57.580229  352746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:51:57.594761  352746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:51:57.608978  352746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1212 01:51:57.618848  352746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 01:51:57.627630  352746 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 01:51:57.627755  352746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 01:51:57.636776  352746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:51:57.646498  352746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 01:51:57.655546  352746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 01:51:57.664728  352746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:51:57.672810  352746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 01:51:57.681659  352746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1212 01:51:57.690529  352746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1212 01:51:57.699966  352746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:51:57.707538  352746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:51:57.715206  352746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:51:57.824614  352746 ssh_runner.go:195] Run: sudo systemctl restart containerd
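
	[editor's note] The run of sed commands above boils down to forcing a handful of settings in /etc/containerd/config.toml before the restart: the pause image, restrict_oom_score_adj, and SystemdCgroup=false to match the cgroupfs driver detected on the host. A regexp-based sketch of the same edits (the input fragment is illustrative; keys and values are taken from the log):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Illustrative fragment; the real file ships with the kicbase image.
		conf := `[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.9"
	  restrict_oom_score_adj = true
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = true
	`
		edits := []struct{ pattern, repl string }{
			{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`},
			{`(?m)^(\s*)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
			// cgroupfs cgroup driver, so systemd cgroups stay off.
			{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
		}
		for _, e := range edits {
			conf = regexp.MustCompile(e.pattern).ReplaceAllString(conf, e.repl)
		}
		fmt.Print(conf)
	}
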
	I1212 01:51:57.963790  352746 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1212 01:51:57.963907  352746 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1212 01:51:57.968147  352746 start.go:564] Will wait 60s for crictl version
	I1212 01:51:57.968261  352746 ssh_runner.go:195] Run: which crictl
	I1212 01:51:57.972204  352746 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 01:51:57.997708  352746 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1212 01:51:57.997820  352746 ssh_runner.go:195] Run: containerd --version
	I1212 01:51:58.022937  352746 ssh_runner.go:195] Run: containerd --version
	I1212 01:51:58.046863  352746 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1212 01:51:58.050012  352746 cli_runner.go:164] Run: docker network inspect bridge-341847 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 01:51:58.069365  352746 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 01:51:58.073372  352746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:51:58.083441  352746 kubeadm.go:884] updating cluster {Name:bridge-341847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-341847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:51:58.083561  352746 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1212 01:51:58.083630  352746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:51:58.113387  352746 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:51:58.113412  352746 containerd.go:534] Images already preloaded, skipping extraction
	I1212 01:51:58.113475  352746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:51:58.140417  352746 containerd.go:627] all images are preloaded for containerd runtime.
	I1212 01:51:58.140440  352746 cache_images.go:86] Images are preloaded, skipping loading
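
	[editor's note] The two crictl listings above are how the run decides the preload already landed in the volume: list the runtime's images as JSON and check the expected control-plane images are present. A sketch of that check, assuming crictl's JSON output shape (hypothetical helper, not minikube's containerd.go):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors the subset of `crictl images --output json` used here.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags)
		}
	}
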
	I1212 01:51:58.140448  352746 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 containerd true true} ...
	I1212 01:51:58.140542  352746 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-341847 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:bridge-341847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
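
	[editor's note] The unit fragment above is later written out as a systemd drop-in (the 317-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below); the bare ExecStart= line is the standard systemd idiom for clearing the base unit's command before overriding it. A minimal sketch of installing such a drop-in (kubelet flags abbreviated relative to the log):

	package main

	import "os"

	func main() {
		// Empty ExecStart= first: systemd requires clearing an existing
		// ExecStart before a drop-in may replace it.
		unit := `[Unit]
	Wants=containerd.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --hostname-override=bridge-341847 --node-ip=192.168.76.2

	[Install]
	`
		err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unit), 0o644)
		if err != nil {
			panic(err)
		}
	}
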
	I1212 01:51:58.140611  352746 ssh_runner.go:195] Run: sudo crictl info
	I1212 01:51:58.168961  352746 cni.go:84] Creating CNI manager for "bridge"
	I1212 01:51:58.169001  352746 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 01:51:58.169055  352746 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-341847 NodeName:bridge-341847 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:51:58.169225  352746 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "bridge-341847"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
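
	[editor's note] The generated kubeadm.yaml above stacks four API documents (InitConfiguration and ClusterConfiguration at kubeadm v1beta4, KubeletConfiguration at v1beta1, KubeProxyConfiguration at v1alpha1); the 0% eviction thresholds and imageGCHighThresholdPercent: 100 deliberately switch off disk-pressure management for CI. A toy illustration of how such a multi-document file splits on its --- separators:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Stand-in for /var/tmp/minikube/kubeadm.yaml; only the kind lines matter here.
		yaml := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
		for i, doc := range strings.Split(yaml, "---\n") {
			fmt.Printf("document %d: %s", i+1, doc)
		}
	}
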
	
	I1212 01:51:58.169300  352746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 01:51:58.177259  352746 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 01:51:58.177326  352746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:51:58.186706  352746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1212 01:51:58.200149  352746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:51:58.213603  352746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1212 01:51:58.227400  352746 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 01:51:58.230915  352746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:51:58.240840  352746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:51:58.363967  352746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:51:58.379981  352746 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847 for IP: 192.168.76.2
	I1212 01:51:58.380004  352746 certs.go:195] generating shared ca certs ...
	I1212 01:51:58.380020  352746 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:51:58.380209  352746 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
	I1212 01:51:58.380273  352746 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
	I1212 01:51:58.380288  352746 certs.go:257] generating profile certs ...
	I1212 01:51:58.380357  352746 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/client.key
	I1212 01:51:58.380376  352746 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/client.crt with IP's: []
	I1212 01:51:58.695670  352746 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/client.crt ...
	I1212 01:51:58.695705  352746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/client.crt: {Name:mk4b7a6382bf1bfb85eec2692b357acd61fc76e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:51:58.695907  352746 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/client.key ...
	I1212 01:51:58.695921  352746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/client.key: {Name:mkf9a6f40e1070d4f9f13a6bd5901f40dfae8a3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:51:58.696041  352746 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/apiserver.key.92a91d5d
	I1212 01:51:58.696061  352746 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/apiserver.crt.92a91d5d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1212 01:51:58.866934  352746 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/apiserver.crt.92a91d5d ...
	I1212 01:51:58.866968  352746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/apiserver.crt.92a91d5d: {Name:mkb09f127b34209d194c120fcbd4be17bac613ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:51:58.867167  352746 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/apiserver.key.92a91d5d ...
	I1212 01:51:58.867185  352746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/apiserver.key.92a91d5d: {Name:mk7f4e2dd9a8334fc63757be61ba142659a5b27d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:51:58.867272  352746 certs.go:382] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/apiserver.crt.92a91d5d -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/apiserver.crt
	I1212 01:51:58.867349  352746 certs.go:386] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/apiserver.key.92a91d5d -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/apiserver.key
	I1212 01:51:58.867409  352746 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/proxy-client.key
	I1212 01:51:58.867427  352746 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/proxy-client.crt with IP's: []
	I1212 01:51:59.025960  352746 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/proxy-client.crt ...
	I1212 01:51:59.025994  352746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/proxy-client.crt: {Name:mk89e5cf52b759523fa3a6275c2549c89708ebde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:51:59.026179  352746 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/proxy-client.key ...
	I1212 01:51:59.026192  352746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/proxy-client.key: {Name:mkd544c87219f023e769b753c9b42d51ce3f47dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
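
	[editor's note] The apiserver profile cert generated above is signed by the shared minikubeCA with SANs for the service VIP (10.96.0.1), loopback, and the node IP. A minimal sketch of that kind of signing with crypto/x509 (not minikube's crypto.go; key size and certificate fields beyond the logged SANs and the 26280h expiry are assumptions):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA standing in for the cached minikubeCA key pair.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		ca, _ := x509.ParseCertificate(caDER)

		// Leaf cert with the SANs the log lists for the apiserver.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leaf := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
			},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, leaf, ca, &leafKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Println("signed apiserver cert:", len(der), "bytes")
	}
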
	I1212 01:51:59.026381  352746 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
	W1212 01:51:59.026432  352746 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
	I1212 01:51:59.026446  352746 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 01:51:59.026474  352746 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
	I1212 01:51:59.026505  352746 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:51:59.026533  352746 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
	I1212 01:51:59.026583  352746 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
	I1212 01:51:59.027183  352746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:51:59.050175  352746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 01:51:59.070962  352746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:51:59.092586  352746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:51:59.111692  352746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 01:51:59.131958  352746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:51:59.150001  352746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:51:59.167827  352746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/bridge-341847/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:51:59.185710  352746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
	I1212 01:51:59.203142  352746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:51:59.221262  352746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
	I1212 01:51:59.239035  352746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:51:59.252191  352746 ssh_runner.go:195] Run: openssl version
	I1212 01:51:59.258469  352746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
	I1212 01:51:59.265726  352746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
	I1212 01:51:59.273265  352746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
	I1212 01:51:59.277379  352746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
	I1212 01:51:59.277457  352746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
	I1212 01:51:59.325928  352746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 01:51:59.334183  352746 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42902.pem /etc/ssl/certs/3ec20f2e.0
	I1212 01:51:59.342349  352746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:51:59.350956  352746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 01:51:59.359776  352746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:51:59.363818  352746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:51:59.363884  352746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:51:59.405055  352746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 01:51:59.412650  352746 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 01:51:59.420124  352746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
	I1212 01:51:59.427760  352746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
	I1212 01:51:59.435442  352746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
	I1212 01:51:59.439311  352746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
	I1212 01:51:59.439436  352746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
	I1212 01:51:59.480327  352746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 01:51:59.488085  352746 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4290.pem /etc/ssl/certs/51391683.0
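
	[editor's note] The symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes: OpenSSL's hashed-directory lookup under /etc/ssl/certs finds a CA by <subject-hash>.0, which is why each PEM gets linked under the hash that `openssl x509 -hash -noout` prints. A hypothetical re-creation of one linking step (paths mirror the log; it prints the link rather than creating it):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
		fmt.Println("ln -fs", pem, fmt.Sprintf("/etc/ssl/certs/%s.0", hash))
	}
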
	I1212 01:51:59.495395  352746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:51:59.499130  352746 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 01:51:59.499216  352746 kubeadm.go:401] StartCluster: {Name:bridge-341847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-341847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:51:59.499309  352746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1212 01:51:59.499392  352746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:51:59.524572  352746 cri.go:89] found id: ""
	I1212 01:51:59.524698  352746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:51:59.532492  352746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:51:59.540213  352746 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 01:51:59.540334  352746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:51:59.548203  352746 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:51:59.548220  352746 kubeadm.go:158] found existing configuration files:
	
	I1212 01:51:59.548272  352746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:51:59.556095  352746 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:51:59.556180  352746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:51:59.563915  352746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:51:59.571573  352746 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:51:59.571649  352746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:51:59.578750  352746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:51:59.586391  352746 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:51:59.586533  352746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:51:59.594020  352746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:51:59.601569  352746 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:51:59.601644  352746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
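
The grep-then-rm sequence above is minikube's stale kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at this cluster's endpoint. A minimal shell sketch of the same check, with the endpoint and paths taken from the log (illustrative, not minikube's actual implementation):

    # Remove any kubeconfig that does not reference this cluster's endpoint,
    # so the upcoming `kubeadm init` regenerates it from scratch.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin kubelet controller-manager scheduler; do
      cfg="/etc/kubernetes/${f}.conf"
      sudo grep -q "$endpoint" "$cfg" 2>/dev/null || sudo rm -f "$cfg"
    done
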
	I1212 01:51:59.608833  352746 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 01:51:59.651185  352746 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 01:51:59.651381  352746 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 01:51:59.672661  352746 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 01:51:59.672773  352746 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 01:51:59.672829  352746 kubeadm.go:319] OS: Linux
	I1212 01:51:59.672898  352746 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 01:51:59.672962  352746 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 01:51:59.673034  352746 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 01:51:59.673097  352746 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 01:51:59.673171  352746 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 01:51:59.673241  352746 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 01:51:59.673328  352746 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 01:51:59.673409  352746 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 01:51:59.673487  352746 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 01:51:59.737476  352746 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:51:59.737667  352746 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:51:59.737792  352746 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:51:59.743730  352746 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:51:59.748806  352746 out.go:252]   - Generating certificates and keys ...
	I1212 01:51:59.748958  352746 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 01:51:59.749069  352746 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 01:52:00.194060  352746 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 01:52:00.572148  352746 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 01:52:01.632134  352746 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 01:52:02.326270  352746 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 01:52:02.520168  352746 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 01:52:02.520385  352746 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [bridge-341847 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 01:52:03.653624  352746 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 01:52:03.654005  352746 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [bridge-341847 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1212 01:52:03.816559  352746 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 01:52:04.257362  352746 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 01:52:04.696684  352746 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 01:52:04.697057  352746 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:52:07.069158  352746 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:52:07.614838  352746 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:52:07.785249  352746 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:52:09.042246  352746 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:52:10.016308  352746 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:52:10.017836  352746 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:52:10.020562  352746 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:52:10.024446  352746 out.go:252]   - Booting up control plane ...
	I1212 01:52:10.024561  352746 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:52:10.024646  352746 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:52:10.024712  352746 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:52:10.043270  352746 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:52:10.043671  352746 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 01:52:10.052900  352746 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 01:52:10.053314  352746 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:52:10.054578  352746 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 01:52:10.198764  352746 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:52:10.198889  352746 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:52:11.203429  352746 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001877253s
	I1212 01:52:11.205217  352746 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 01:52:11.205553  352746 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1212 01:52:11.205788  352746 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 01:52:11.206658  352746 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 01:52:15.857089  352746 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.650247719s
	I1212 01:52:16.130945  352746 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.923813475s
	I1212 01:52:17.707754  352746 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501542092s
	I1212 01:52:17.739963  352746 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:52:17.754601  352746 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:52:17.767060  352746 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:52:17.767265  352746 kubeadm.go:319] [mark-control-plane] Marking the node bridge-341847 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:52:17.780071  352746 kubeadm.go:319] [bootstrap-token] Using token: hk2hoj.4btubxs1h2pxt5gj
	I1212 01:52:17.783215  352746 out.go:252]   - Configuring RBAC rules ...
	I1212 01:52:17.783341  352746 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:52:17.792562  352746 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:52:17.808504  352746 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:52:17.815742  352746 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:52:17.824102  352746 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:52:17.833065  352746 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:52:18.115037  352746 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:52:18.547554  352746 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 01:52:19.114692  352746 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 01:52:19.115995  352746 kubeadm.go:319] 
	I1212 01:52:19.116067  352746 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 01:52:19.116072  352746 kubeadm.go:319] 
	I1212 01:52:19.116150  352746 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 01:52:19.116154  352746 kubeadm.go:319] 
	I1212 01:52:19.116179  352746 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 01:52:19.116238  352746 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:52:19.116288  352746 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:52:19.116292  352746 kubeadm.go:319] 
	I1212 01:52:19.116345  352746 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 01:52:19.116349  352746 kubeadm.go:319] 
	I1212 01:52:19.116397  352746 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:52:19.116401  352746 kubeadm.go:319] 
	I1212 01:52:19.116452  352746 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 01:52:19.116528  352746 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:52:19.116596  352746 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:52:19.116601  352746 kubeadm.go:319] 
	I1212 01:52:19.116685  352746 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:52:19.116762  352746 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 01:52:19.116766  352746 kubeadm.go:319] 
	I1212 01:52:19.116850  352746 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hk2hoj.4btubxs1h2pxt5gj \
	I1212 01:52:19.116960  352746 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:afcc53ad074d6c1edbf934e87f29b46b63bfa667710db88570d6339eb754c50c \
	I1212 01:52:19.116980  352746 kubeadm.go:319] 	--control-plane 
	I1212 01:52:19.116984  352746 kubeadm.go:319] 
	I1212 01:52:19.117069  352746 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:52:19.117073  352746 kubeadm.go:319] 
	I1212 01:52:19.117155  352746 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hk2hoj.4btubxs1h2pxt5gj \
	I1212 01:52:19.117258  352746 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:afcc53ad074d6c1edbf934e87f29b46b63bfa667710db88570d6339eb754c50c 
	I1212 01:52:19.121987  352746 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1212 01:52:19.122221  352746 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 01:52:19.122331  352746 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
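
For reference, the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's public key. It can be recomputed on the control plane with the pipeline from the kubeadm documentation; the CA path below assumes minikube's certificateDir shown earlier in this log (/var/lib/minikube/certs):

    # Recompute the discovery token CA certificate hash (RSA CA assumed).
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'
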
	I1212 01:52:19.122351  352746 cni.go:84] Creating CNI manager for "bridge"
	I1212 01:52:19.125569  352746 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:52:19.128436  352746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:52:19.136248  352746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
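
The 496-byte conflist itself is not echoed in the log. As a rough sketch only, a bridge CNI config of that shape, assuming minikube's usual defaults (the 10.244.0.0/16 pod subnet in particular is an assumption, not taken from this run), would look like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
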
	I1212 01:52:19.151608  352746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:52:19.151678  352746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:52:19.151730  352746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-341847 minikube.k8s.io/updated_at=2025_12_12T01_52_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c04ca15b4c226075dd018d362cd996ac712bf2c0 minikube.k8s.io/name=bridge-341847 minikube.k8s.io/primary=true
	I1212 01:52:19.341629  352746 ops.go:34] apiserver oom_adj: -16
	I1212 01:52:19.355154  352746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:52:19.855549  352746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:52:20.355878  352746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:52:20.855712  352746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:52:21.355276  352746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:52:21.855270  352746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:52:22.355761  352746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:52:22.856206  352746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:52:23.356112  352746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:52:23.435795  352746 kubeadm.go:1114] duration metric: took 4.284171653s to wait for elevateKubeSystemPrivileges
	I1212 01:52:23.435828  352746 kubeadm.go:403] duration metric: took 23.93661694s to StartCluster
	I1212 01:52:23.435845  352746 settings.go:142] acquiring lock: {Name:mk6dd4250df69aeba4752e9f33aeef37272375c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:52:23.435906  352746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:52:23.436873  352746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/kubeconfig: {Name:mk3e2cbffa089f0a26351f5f6a49b8ed3b66edd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:52:23.437081  352746 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1212 01:52:23.437165  352746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 01:52:23.437407  352746 config.go:182] Loaded profile config "bridge-341847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 01:52:23.437451  352746 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:52:23.437510  352746 addons.go:70] Setting storage-provisioner=true in profile "bridge-341847"
	I1212 01:52:23.437527  352746 addons.go:239] Setting addon storage-provisioner=true in "bridge-341847"
	I1212 01:52:23.437551  352746 host.go:66] Checking if "bridge-341847" exists ...
	I1212 01:52:23.438125  352746 cli_runner.go:164] Run: docker container inspect bridge-341847 --format={{.State.Status}}
	I1212 01:52:23.438480  352746 addons.go:70] Setting default-storageclass=true in profile "bridge-341847"
	I1212 01:52:23.438504  352746 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-341847"
	I1212 01:52:23.438763  352746 cli_runner.go:164] Run: docker container inspect bridge-341847 --format={{.State.Status}}
	I1212 01:52:23.441740  352746 out.go:179] * Verifying Kubernetes components...
	I1212 01:52:23.445930  352746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:52:23.480525  352746 addons.go:239] Setting addon default-storageclass=true in "bridge-341847"
	I1212 01:52:23.480564  352746 host.go:66] Checking if "bridge-341847" exists ...
	I1212 01:52:23.480974  352746 cli_runner.go:164] Run: docker container inspect bridge-341847 --format={{.State.Status}}
	I1212 01:52:23.484066  352746 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:52:23.486728  352746 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:52:23.486750  352746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:52:23.486818  352746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-341847
	I1212 01:52:23.518210  352746 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:52:23.518236  352746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:52:23.518302  352746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-341847
	I1212 01:52:23.542295  352746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/bridge-341847/id_rsa Username:docker}
	I1212 01:52:23.552559  352746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/bridge-341847/id_rsa Username:docker}
	I1212 01:52:23.688508  352746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 01:52:23.725243  352746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:52:23.771838  352746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:52:23.876531  352746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:52:24.264920  352746 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
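
Reconstructed from the two sed expressions in the command above, the edit adds a log directive before errors and splices this hosts stanza in ahead of the forward block, so host.minikube.internal resolves to the gateway from inside the cluster:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }
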
	I1212 01:52:24.265850  352746 node_ready.go:35] waiting up to 15m0s for node "bridge-341847" to be "Ready" ...
	I1212 01:52:24.302695  352746 node_ready.go:49] node "bridge-341847" is "Ready"
	I1212 01:52:24.302725  352746 node_ready.go:38] duration metric: took 36.855468ms for node "bridge-341847" to be "Ready" ...
	I1212 01:52:24.302740  352746 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:52:24.302814  352746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:52:24.725796  352746 api_server.go:72] duration metric: took 1.288683517s to wait for apiserver process to appear ...
	I1212 01:52:24.725820  352746 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:52:24.725838  352746 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1212 01:52:24.739101  352746 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
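
The same probe can be reproduced by hand against the endpoint shown in the log; -k is needed because the test cluster's CA is not in the host trust store:

    curl -sk https://192.168.76.2:8443/healthz
    # ok
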
	I1212 01:52:24.740281  352746 api_server.go:141] control plane version: v1.34.2
	I1212 01:52:24.740333  352746 api_server.go:131] duration metric: took 14.504674ms to wait for apiserver health ...
	I1212 01:52:24.740358  352746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:52:24.751890  352746 system_pods.go:59] 8 kube-system pods found
	I1212 01:52:24.751968  352746 system_pods.go:61] "coredns-66bc5c9577-5wfrh" [02b83a03-3179-435d-9a16-691fd432544f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:52:24.751997  352746 system_pods.go:61] "coredns-66bc5c9577-rnbq6" [907cad13-2e96-4333-8f57-8102d12baccd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:52:24.752034  352746 system_pods.go:61] "etcd-bridge-341847" [ae47edf4-f989-4b53-91cc-9e9ba0c5d53b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:52:24.752057  352746 system_pods.go:61] "kube-apiserver-bridge-341847" [b24b91bc-0f10-47b9-94a8-d315e36ff265] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:52:24.752078  352746 system_pods.go:61] "kube-controller-manager-bridge-341847" [6dc34ccc-55db-46f7-9d55-a010c9ca8d6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:52:24.752100  352746 system_pods.go:61] "kube-proxy-hblpk" [55fdf80f-a1e3-4bce-b3ed-ccf5faaf580c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:52:24.752137  352746 system_pods.go:61] "kube-scheduler-bridge-341847" [0e58c51c-5d7d-49f0-bb14-4b4cb269cf0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:52:24.752160  352746 system_pods.go:61] "storage-provisioner" [39e486cf-007b-4475-b41c-72ae4737e1c8] Pending
	I1212 01:52:24.752178  352746 system_pods.go:74] duration metric: took 11.802364ms to wait for pod list to return data ...
	I1212 01:52:24.752197  352746 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:52:24.758701  352746 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 01:52:24.759258  352746 default_sa.go:45] found service account: "default"
	I1212 01:52:24.759281  352746 default_sa.go:55] duration metric: took 7.064467ms for default service account to be created ...
	I1212 01:52:24.759292  352746 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:52:24.761868  352746 addons.go:530] duration metric: took 1.324405001s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 01:52:24.762700  352746 system_pods.go:86] 8 kube-system pods found
	I1212 01:52:24.762725  352746 system_pods.go:89] "coredns-66bc5c9577-5wfrh" [02b83a03-3179-435d-9a16-691fd432544f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:52:24.762733  352746 system_pods.go:89] "coredns-66bc5c9577-rnbq6" [907cad13-2e96-4333-8f57-8102d12baccd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:52:24.762740  352746 system_pods.go:89] "etcd-bridge-341847" [ae47edf4-f989-4b53-91cc-9e9ba0c5d53b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:52:24.762747  352746 system_pods.go:89] "kube-apiserver-bridge-341847" [b24b91bc-0f10-47b9-94a8-d315e36ff265] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:52:24.762754  352746 system_pods.go:89] "kube-controller-manager-bridge-341847" [6dc34ccc-55db-46f7-9d55-a010c9ca8d6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:52:24.762760  352746 system_pods.go:89] "kube-proxy-hblpk" [55fdf80f-a1e3-4bce-b3ed-ccf5faaf580c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:52:24.762765  352746 system_pods.go:89] "kube-scheduler-bridge-341847" [0e58c51c-5d7d-49f0-bb14-4b4cb269cf0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:52:24.762772  352746 system_pods.go:89] "storage-provisioner" [39e486cf-007b-4475-b41c-72ae4737e1c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:52:24.762799  352746 retry.go:31] will retry after 253.53522ms: missing components: kube-dns, kube-proxy
	I1212 01:52:24.769178  352746 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-341847" context rescaled to 1 replicas
	I1212 01:52:25.022319  352746 system_pods.go:86] 8 kube-system pods found
	I1212 01:52:25.022363  352746 system_pods.go:89] "coredns-66bc5c9577-5wfrh" [02b83a03-3179-435d-9a16-691fd432544f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:52:25.022376  352746 system_pods.go:89] "coredns-66bc5c9577-rnbq6" [907cad13-2e96-4333-8f57-8102d12baccd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:52:25.022385  352746 system_pods.go:89] "etcd-bridge-341847" [ae47edf4-f989-4b53-91cc-9e9ba0c5d53b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:52:25.022394  352746 system_pods.go:89] "kube-apiserver-bridge-341847" [b24b91bc-0f10-47b9-94a8-d315e36ff265] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:52:25.022406  352746 system_pods.go:89] "kube-controller-manager-bridge-341847" [6dc34ccc-55db-46f7-9d55-a010c9ca8d6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:52:25.022413  352746 system_pods.go:89] "kube-proxy-hblpk" [55fdf80f-a1e3-4bce-b3ed-ccf5faaf580c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:52:25.022421  352746 system_pods.go:89] "kube-scheduler-bridge-341847" [0e58c51c-5d7d-49f0-bb14-4b4cb269cf0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:52:25.022436  352746 system_pods.go:89] "storage-provisioner" [39e486cf-007b-4475-b41c-72ae4737e1c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:52:25.022454  352746 retry.go:31] will retry after 332.431038ms: missing components: kube-dns, kube-proxy
	I1212 01:52:25.359069  352746 system_pods.go:86] 8 kube-system pods found
	I1212 01:52:25.359105  352746 system_pods.go:89] "coredns-66bc5c9577-5wfrh" [02b83a03-3179-435d-9a16-691fd432544f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:52:25.359114  352746 system_pods.go:89] "coredns-66bc5c9577-rnbq6" [907cad13-2e96-4333-8f57-8102d12baccd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:52:25.359122  352746 system_pods.go:89] "etcd-bridge-341847" [ae47edf4-f989-4b53-91cc-9e9ba0c5d53b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:52:25.359129  352746 system_pods.go:89] "kube-apiserver-bridge-341847" [b24b91bc-0f10-47b9-94a8-d315e36ff265] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:52:25.359141  352746 system_pods.go:89] "kube-controller-manager-bridge-341847" [6dc34ccc-55db-46f7-9d55-a010c9ca8d6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:52:25.359161  352746 system_pods.go:89] "kube-proxy-hblpk" [55fdf80f-a1e3-4bce-b3ed-ccf5faaf580c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:52:25.359168  352746 system_pods.go:89] "kube-scheduler-bridge-341847" [0e58c51c-5d7d-49f0-bb14-4b4cb269cf0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:52:25.359175  352746 system_pods.go:89] "storage-provisioner" [39e486cf-007b-4475-b41c-72ae4737e1c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:52:25.359196  352746 retry.go:31] will retry after 429.074963ms: missing components: kube-dns, kube-proxy
	I1212 01:52:25.792241  352746 system_pods.go:86] 8 kube-system pods found
	I1212 01:52:25.792276  352746 system_pods.go:89] "coredns-66bc5c9577-5wfrh" [02b83a03-3179-435d-9a16-691fd432544f] Failed / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:52:25.792287  352746 system_pods.go:89] "coredns-66bc5c9577-rnbq6" [907cad13-2e96-4333-8f57-8102d12baccd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:52:25.792297  352746 system_pods.go:89] "etcd-bridge-341847" [ae47edf4-f989-4b53-91cc-9e9ba0c5d53b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:52:25.792304  352746 system_pods.go:89] "kube-apiserver-bridge-341847" [b24b91bc-0f10-47b9-94a8-d315e36ff265] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:52:25.792308  352746 system_pods.go:89] "kube-controller-manager-bridge-341847" [6dc34ccc-55db-46f7-9d55-a010c9ca8d6c] Running
	I1212 01:52:25.792314  352746 system_pods.go:89] "kube-proxy-hblpk" [55fdf80f-a1e3-4bce-b3ed-ccf5faaf580c] Running
	I1212 01:52:25.792323  352746 system_pods.go:89] "kube-scheduler-bridge-341847" [0e58c51c-5d7d-49f0-bb14-4b4cb269cf0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:52:25.792327  352746 system_pods.go:89] "storage-provisioner" [39e486cf-007b-4475-b41c-72ae4737e1c8] Running
	I1212 01:52:25.792336  352746 system_pods.go:126] duration metric: took 1.033038753s to wait for k8s-apps to be running ...
	I1212 01:52:25.792349  352746 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:52:25.792403  352746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:52:25.805830  352746 system_svc.go:56] duration metric: took 13.473542ms WaitForService to wait for kubelet
	I1212 01:52:25.805860  352746 kubeadm.go:587] duration metric: took 2.368749802s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:52:25.805880  352746 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:52:25.809076  352746 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 01:52:25.809110  352746 node_conditions.go:123] node cpu capacity is 2
	I1212 01:52:25.809124  352746 node_conditions.go:105] duration metric: took 3.238733ms to run NodePressure ...
	I1212 01:52:25.809136  352746 start.go:242] waiting for startup goroutines ...
	I1212 01:52:25.809144  352746 start.go:247] waiting for cluster config update ...
	I1212 01:52:25.809158  352746 start.go:256] writing updated cluster config ...
	I1212 01:52:25.809437  352746 ssh_runner.go:195] Run: rm -f paused
	I1212 01:52:25.813067  352746 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 01:52:25.817722  352746 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rnbq6" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 01:52:27.823636  352746 pod_ready.go:104] pod "coredns-66bc5c9577-rnbq6" is not "Ready", error: <nil>
	W1212 01:52:30.323775  352746 pod_ready.go:104] pod "coredns-66bc5c9577-rnbq6" is not "Ready", error: <nil>
	W1212 01:52:32.824093  352746 pod_ready.go:104] pod "coredns-66bc5c9577-rnbq6" is not "Ready", error: <nil>
	W1212 01:52:35.324549  352746 pod_ready.go:104] pod "coredns-66bc5c9577-rnbq6" is not "Ready", error: <nil>
	W1212 01:52:37.325746  352746 pod_ready.go:104] pod "coredns-66bc5c9577-rnbq6" is not "Ready", error: <nil>
	W1212 01:52:39.823478  352746 pod_ready.go:104] pod "coredns-66bc5c9577-rnbq6" is not "Ready", error: <nil>
	W1212 01:52:41.823778  352746 pod_ready.go:104] pod "coredns-66bc5c9577-rnbq6" is not "Ready", error: <nil>
	W1212 01:52:44.323434  352746 pod_ready.go:104] pod "coredns-66bc5c9577-rnbq6" is not "Ready", error: <nil>
	W1212 01:52:46.324480  352746 pod_ready.go:104] pod "coredns-66bc5c9577-rnbq6" is not "Ready", error: <nil>
	W1212 01:52:48.824367  352746 pod_ready.go:104] pod "coredns-66bc5c9577-rnbq6" is not "Ready", error: <nil>
	W1212 01:52:51.322830  352746 pod_ready.go:104] pod "coredns-66bc5c9577-rnbq6" is not "Ready", error: <nil>
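
The pod_ready loop above polls roughly every 2.5s for up to 4m0s. A comparable one-shot check with stock kubectl (pod name taken from the log; this covers only the "Ready" half, not the "or be gone" case) would be:

    kubectl -n kube-system wait pod/coredns-66bc5c9577-rnbq6 \
      --for=condition=Ready --timeout=4m
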
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451347340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451362208Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451390508Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451404473Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451413802Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451425445Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451435169Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451453918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451470123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451502804Z" level=info msg="Connect containerd service"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.451753785Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.452300474Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.470473080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.470539313Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.470573570Z" level=info msg="Start subscribing containerd event"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.470624376Z" level=info msg="Start recovering state"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499034921Z" level=info msg="Start event monitor"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499222886Z" level=info msg="Start cni network conf syncer for default"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499311773Z" level=info msg="Start streaming server"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499396130Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499649310Z" level=info msg="runtime interface starting up..."
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499722058Z" level=info msg="starting plugins..."
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.499802846Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 01:33:16 no-preload-361053 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 12 01:33:16 no-preload-361053 containerd[555]: time="2025-12-12T01:33:16.501821533Z" level=info msg="containerd successfully booted in 0.072171s"
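
The 01:33:16.452 "failed to load cni during init" error above is expected on a fresh node: /etc/cni/net.d is still empty when containerd starts, and the "Start cni network conf syncer" step picks the config up once a conflist is written. One way to confirm from the node (a generic check, not part of this test):

    sudo ls /etc/cni/net.d          # should list a *.conflist once CNI is configured
    sudo crictl info | grep -i cni  # the CRI plugin's view of its CNI config state
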
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 01:52:53.298167   10311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:52:53.298656   10311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:52:53.300177   10311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:52:53.300617   10311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1212 01:52:53.302112   10311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
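
The refused connection means nothing is listening on 8443, which is consistent with the empty container list above and the crash-looping kubelet below (the static control-plane pods never start). A generic way to confirm from the node:

    sudo ss -ltn 'sport = :8443'   # no output => no listener on 8443
    sudo crictl ps -a              # empty, matching the container status section
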
	
	
	==> dmesg <==
	[Dec11 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.947343] kauditd_printk_skb: 36 callbacks suppressed
	[Dec12 00:40] hrtimer: interrupt took 11339963 ns
	
	
	==> kernel <==
	 01:52:53 up  2:35,  0 user,  load average: 1.13, 1.75, 1.64
	Linux no-preload-361053 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 01:52:50 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:52:50 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1564.
	Dec 12 01:52:50 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:52:50 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:52:50 no-preload-361053 kubelet[10172]: E1212 01:52:50.840629   10172 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:52:50 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:52:50 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:52:51 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1565.
	Dec 12 01:52:51 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:52:51 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:52:51 no-preload-361053 kubelet[10178]: E1212 01:52:51.585151   10178 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:52:51 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:52:51 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:52:52 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1566.
	Dec 12 01:52:52 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:52:52 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:52:52 no-preload-361053 kubelet[10196]: E1212 01:52:52.296050   10196 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:52:52 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:52:52 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:52:53 no-preload-361053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1567.
	Dec 12 01:52:53 no-preload-361053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:52:53 no-preload-361053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 01:52:53 no-preload-361053 kubelet[10256]: E1212 01:52:53.109180   10256 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 01:52:53 no-preload-361053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 01:52:53 no-preload-361053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
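
The kubelet binary here is v1.35.0-beta.0, which refuses to run on a cgroup v1 host, and this node (5.15.0-1084-aws, per the kernel section) is still on the legacy hierarchy, so the unit crash-loops indefinitely. Checking which mode a host is in, plus a hypothetical remediation for a GRUB-managed host (generic sketches, not the test harness's fix):

    stat -fc %T /sys/fs/cgroup     # "cgroup2fs" => cgroup v2; "tmpfs" => legacy v1
    # Hypothetical: enable the unified hierarchy at boot, then reboot.
    # sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&systemd.unified_cgroup_hierarchy=1 /' /etc/default/grub
    # sudo update-grub && sudo reboot
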
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-361053 -n no-preload-361053
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-361053 -n no-preload-361053: exit status 2 (324.9474ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-361053" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (269.84s)
E1212 01:52:56.807502    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:52:56.813912    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:52:56.825659    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:52:56.847111    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:52:56.888536    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:52:56.969966    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:52:57.131906    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:52:57.453985    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:52:58.095860    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:52:58.176366    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/auto-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

Test pass (345/417)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.42
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.2/json-events 4.46
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.3
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 4.25
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.16
30 TestBinaryMirror 0.61
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 152.84
38 TestAddons/serial/Volcano 40.75
40 TestAddons/serial/GCPAuth/Namespaces 0.19
41 TestAddons/serial/GCPAuth/FakeCredentials 9.88
44 TestAddons/parallel/Registry 16.26
45 TestAddons/parallel/RegistryCreds 0.99
46 TestAddons/parallel/Ingress 19.17
47 TestAddons/parallel/InspektorGadget 11.05
48 TestAddons/parallel/MetricsServer 5.87
50 TestAddons/parallel/CSI 45.7
51 TestAddons/parallel/Headlamp 17.26
52 TestAddons/parallel/CloudSpanner 6.6
53 TestAddons/parallel/LocalPath 53.05
54 TestAddons/parallel/NvidiaDevicePlugin 6.59
55 TestAddons/parallel/Yakd 12.06
57 TestAddons/StoppedEnableDisable 12.4
58 TestCertOptions 38.83
59 TestCertExpiration 221.46
61 TestForceSystemdFlag 37.46
62 TestForceSystemdEnv 35.94
63 TestDockerEnvContainerd 45.9
67 TestErrorSpam/setup 30.24
68 TestErrorSpam/start 0.84
69 TestErrorSpam/status 1.12
70 TestErrorSpam/pause 1.7
71 TestErrorSpam/unpause 1.8
72 TestErrorSpam/stop 1.63
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 79.23
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 7.17
79 TestFunctional/serial/KubeContext 0.07
80 TestFunctional/serial/KubectlGetPods 0.12
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.48
84 TestFunctional/serial/CacheCmd/cache/add_local 1.32
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.9
89 TestFunctional/serial/CacheCmd/cache/delete 0.11
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 50.15
93 TestFunctional/serial/ComponentHealth 0.09
94 TestFunctional/serial/LogsCmd 1.48
95 TestFunctional/serial/LogsFileCmd 1.43
96 TestFunctional/serial/InvalidService 4.44
98 TestFunctional/parallel/ConfigCmd 0.46
99 TestFunctional/parallel/DashboardCmd 8.56
100 TestFunctional/parallel/DryRun 0.51
101 TestFunctional/parallel/InternationalLanguage 0.23
102 TestFunctional/parallel/StatusCmd 1.04
106 TestFunctional/parallel/ServiceCmdConnect 7.61
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 19.92
110 TestFunctional/parallel/SSHCmd 0.76
111 TestFunctional/parallel/CpCmd 2.5
113 TestFunctional/parallel/FileSync 0.37
114 TestFunctional/parallel/CertSync 2.19
118 TestFunctional/parallel/NodeLabels 0.1
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
122 TestFunctional/parallel/License 0.24
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.44
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 8.21
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
136 TestFunctional/parallel/ProfileCmd/profile_list 0.45
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
138 TestFunctional/parallel/MountCmd/any-port 8.25
139 TestFunctional/parallel/ServiceCmd/List 0.51
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
142 TestFunctional/parallel/ServiceCmd/Format 0.42
143 TestFunctional/parallel/ServiceCmd/URL 0.42
144 TestFunctional/parallel/MountCmd/specific-port 2.33
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.67
146 TestFunctional/parallel/Version/short 0.07
147 TestFunctional/parallel/Version/components 1.35
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
152 TestFunctional/parallel/ImageCommands/ImageBuild 3.76
153 TestFunctional/parallel/ImageCommands/Setup 0.64
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.28
155 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
156 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
157 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
158 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.32
159 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.48
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.43
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.63
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.44
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.03
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.05
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.28
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.88
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.11
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.97
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.02
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.44
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.46
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.2
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.72
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.18
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.28
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.84
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.55
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.25
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.39
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.41
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.37
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.95
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.95
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.15
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.6
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.24
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.22
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.25
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.23
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.4
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.25
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.14
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.06
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.33
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.32
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.47
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.67
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.36
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.15
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.01
264 TestMultiControlPlane/serial/StartCluster 180.2
265 TestMultiControlPlane/serial/DeployApp 8.49
266 TestMultiControlPlane/serial/PingHostFromPods 1.61
267 TestMultiControlPlane/serial/AddWorkerNode 59.65
268 TestMultiControlPlane/serial/NodeLabels 0.1
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.1
270 TestMultiControlPlane/serial/CopyFile 20.14
271 TestMultiControlPlane/serial/StopSecondaryNode 13.03
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
273 TestMultiControlPlane/serial/RestartSecondaryNode 14.01
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.19
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 97.84
276 TestMultiControlPlane/serial/DeleteSecondaryNode 11.38
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
278 TestMultiControlPlane/serial/StopCluster 36.94
279 TestMultiControlPlane/serial/RestartCluster 59.26
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
281 TestMultiControlPlane/serial/AddSecondaryNode 48.91
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.03
287 TestJSONOutput/start/Command 77.2
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
293 TestJSONOutput/pause/Command 0.76
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/unpause/Command 0.64
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.99
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.26
312 TestKicCustomNetwork/create_custom_network 41.75
313 TestKicCustomNetwork/use_default_bridge_network 36.99
314 TestKicExistingNetwork 28.98
315 TestKicCustomSubnet 37.29
316 TestKicStaticIP 35.37
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 73.31
321 TestMountStart/serial/StartWithMountFirst 8.83
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 8.43
324 TestMountStart/serial/VerifyMountSecond 0.28
325 TestMountStart/serial/DeleteFirst 1.73
326 TestMountStart/serial/VerifyMountPostDelete 0.27
327 TestMountStart/serial/Stop 1.28
328 TestMountStart/serial/RestartStopped 7.75
329 TestMountStart/serial/VerifyMountPostStop 0.29
332 TestMultiNode/serial/FreshStart2Nodes 136.95
333 TestMultiNode/serial/DeployApp2Nodes 5.14
334 TestMultiNode/serial/PingHostFrom2Pods 0.97
335 TestMultiNode/serial/AddNode 29.14
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.71
338 TestMultiNode/serial/CopyFile 10.57
339 TestMultiNode/serial/StopNode 2.49
340 TestMultiNode/serial/StartAfterStop 7.73
341 TestMultiNode/serial/RestartKeepsNodes 72.73
342 TestMultiNode/serial/DeleteNode 5.62
343 TestMultiNode/serial/StopMultiNode 24.08
344 TestMultiNode/serial/RestartMultiNode 50.33
345 TestMultiNode/serial/ValidateNameConflict 34.25
350 TestPreload 138.71
352 TestScheduledStopUnix 109.24
355 TestInsufficientStorage 12.62
356 TestRunningBinaryUpgrade 61.83
359 TestMissingContainerUpgrade 125.78
362 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
363 TestPause/serial/Start 91.96
364 TestNoKubernetes/serial/StartWithK8s 44.09
365 TestNoKubernetes/serial/StartWithStopK8s 23.79
366 TestNoKubernetes/serial/Start 7.8
367 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
368 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
369 TestNoKubernetes/serial/ProfileList 1.13
370 TestNoKubernetes/serial/Stop 1.31
371 TestNoKubernetes/serial/StartNoArgs 6.86
372 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
373 TestPause/serial/SecondStartNoReconfiguration 9.07
374 TestPause/serial/Pause 0.87
375 TestPause/serial/VerifyStatus 0.42
376 TestPause/serial/Unpause 0.83
377 TestPause/serial/PauseAgain 1.16
378 TestPause/serial/DeletePaused 4.52
379 TestPause/serial/VerifyDeletedResources 0.17
380 TestStoppedBinaryUpgrade/Setup 1.05
381 TestStoppedBinaryUpgrade/Upgrade 304.06
382 TestStoppedBinaryUpgrade/MinikubeLogs 1.99
397 TestNetworkPlugins/group/false 3.54
402 TestStartStop/group/old-k8s-version/serial/FirstStart 71.79
404 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.28
405 TestStartStop/group/old-k8s-version/serial/DeployApp 9.4
406 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.42
407 TestStartStop/group/old-k8s-version/serial/Stop 12.17
408 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
409 TestStartStop/group/old-k8s-version/serial/SecondStart 48.49
410 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.52
411 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.34
412 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.63
413 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
414 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.83
415 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
416 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
417 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
418 TestStartStop/group/old-k8s-version/serial/Pause 3.18
420 TestStartStop/group/embed-certs/serial/FirstStart 78.9
421 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
422 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
423 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
424 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.08
427 TestStartStop/group/embed-certs/serial/DeployApp 8.32
428 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.07
429 TestStartStop/group/embed-certs/serial/Stop 12.18
430 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
431 TestStartStop/group/embed-certs/serial/SecondStart 53.27
432 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
433 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
434 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
435 TestStartStop/group/embed-certs/serial/Pause 3.05
440 TestStartStop/group/no-preload/serial/Stop 1.29
441 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
443 TestStartStop/group/newest-cni/serial/DeployApp 0
445 TestStartStop/group/newest-cni/serial/Stop 1.33
446 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
449 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
450 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
453 TestNetworkPlugins/group/auto/Start 79.64
454 TestNetworkPlugins/group/auto/KubeletFlags 0.31
455 TestNetworkPlugins/group/auto/NetCatPod 9.29
456 TestNetworkPlugins/group/auto/DNS 0.2
457 TestNetworkPlugins/group/auto/Localhost 0.18
458 TestNetworkPlugins/group/auto/HairPin 0.15
459 TestNetworkPlugins/group/kindnet/Start 78.87
460 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
461 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
462 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
463 TestNetworkPlugins/group/kindnet/DNS 0.17
464 TestNetworkPlugins/group/kindnet/Localhost 0.15
465 TestNetworkPlugins/group/kindnet/HairPin 0.15
466 TestNetworkPlugins/group/calico/Start 57.76
467 TestNetworkPlugins/group/calico/ControllerPod 6.01
468 TestNetworkPlugins/group/calico/KubeletFlags 0.31
469 TestNetworkPlugins/group/calico/NetCatPod 9.3
470 TestNetworkPlugins/group/calico/DNS 0.18
471 TestNetworkPlugins/group/calico/Localhost 0.15
472 TestNetworkPlugins/group/calico/HairPin 0.14
473 TestNetworkPlugins/group/custom-flannel/Start 55.24
474 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
475 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.27
476 TestNetworkPlugins/group/custom-flannel/DNS 0.18
477 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
478 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
480 TestNetworkPlugins/group/enable-default-cni/Start 76.36
481 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
482 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.35
483 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
484 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
485 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
486 TestNetworkPlugins/group/flannel/Start 54.25
487 TestNetworkPlugins/group/flannel/ControllerPod 6
488 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
489 TestNetworkPlugins/group/flannel/NetCatPod 9.27
490 TestNetworkPlugins/group/flannel/DNS 0.18
491 TestNetworkPlugins/group/flannel/Localhost 0.16
492 TestNetworkPlugins/group/flannel/HairPin 0.14
493 TestNetworkPlugins/group/bridge/Start 71.87
494 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
495 TestNetworkPlugins/group/bridge/NetCatPod 8.26
496 TestNetworkPlugins/group/bridge/DNS 0.17
497 TestNetworkPlugins/group/bridge/Localhost 0.16
498 TestNetworkPlugins/group/bridge/HairPin 0.15

TestDownloadOnly/v1.28.0/json-events (6.42s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-957488 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-957488 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.414948776s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.42s)
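
The json-events subtest drives "minikube start -o=json", which emits progress as one JSON event per stdout line. A minimal Go sketch of the same download-only run, decoding that stream generically; it assumes a minikube binary on PATH, reuses the flags from the invocation above, and the "type"/"data" field names are an assumption about the CloudEvents-style schema, not something this log confirms:

// Sketch: re-run the download-only start above and print its JSON events.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Flags copied verbatim from the test invocation above.
	cmd := exec.Command("minikube", "start", "-o=json", "--download-only",
		"-p", "download-only-957488", "--force", "--alsologtostderr",
		"--kubernetes-version=v1.28.0", "--container-runtime=containerd",
		"--driver=docker")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(out)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // event lines can be long
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		fmt.Println(ev["type"], ev["data"]) // assumed field names
	}
	_ = cmd.Wait() // a download-only start exits once artifacts are cached
}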

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1211 23:55:56.853685    4290 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1211 23:55:56.853763    4290 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
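
The preload-exists subtest only confirms that the tarball downloaded during the json-events run is now on disk under the minikube cache. A minimal sketch of the same local check; the cache layout is taken from the log lines above, not from minikube's source:

// Sketch: verify the cached preload tarball reported by the test log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	home := os.Getenv("MINIKUBE_HOME") // the CI job points this at its .minikube dir
	tarball := filepath.Join(home, "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4")
	if fi, err := os.Stat(tarball); err != nil {
		fmt.Println("preload missing:", err)
	} else {
		fmt.Printf("found local preload: %s (%d bytes)\n", tarball, fi.Size())
	}
}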

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-957488
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-957488: exit status 85 (88.671297ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-957488 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-957488 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 23:55:50
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:55:50.485507    4295 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:55:50.485691    4295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:50.485716    4295 out.go:374] Setting ErrFile to fd 2...
	I1211 23:55:50.485736    4295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:50.486030    4295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	W1211 23:55:50.486212    4295 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22101-2343/.minikube/config/config.json: open /home/jenkins/minikube-integration/22101-2343/.minikube/config/config.json: no such file or directory
	I1211 23:55:50.486676    4295 out.go:368] Setting JSON to true
	I1211 23:55:50.487554    4295 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2297,"bootTime":1765495054,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 23:55:50.487649    4295 start.go:143] virtualization:  
	I1211 23:55:50.493085    4295 out.go:99] [download-only-957488] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1211 23:55:50.493318    4295 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball: no such file or directory
	I1211 23:55:50.493444    4295 notify.go:221] Checking for updates...
	I1211 23:55:50.497487    4295 out.go:171] MINIKUBE_LOCATION=22101
	I1211 23:55:50.501719    4295 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:55:50.505096    4295 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1211 23:55:50.508382    4295 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1211 23:55:50.511683    4295 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1211 23:55:50.518108    4295 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1211 23:55:50.518429    4295 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 23:55:50.538434    4295 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 23:55:50.538553    4295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:55:50.957864    4295 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-11 23:55:50.948433765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 23:55:50.957967    4295 docker.go:319] overlay module found
	I1211 23:55:50.961184    4295 out.go:99] Using the docker driver based on user configuration
	I1211 23:55:50.961215    4295 start.go:309] selected driver: docker
	I1211 23:55:50.961223    4295 start.go:927] validating driver "docker" against <nil>
	I1211 23:55:50.961334    4295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:55:51.018956    4295 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-11 23:55:51.009876611 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 23:55:51.019147    4295 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1211 23:55:51.019463    4295 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1211 23:55:51.019633    4295 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1211 23:55:51.022906    4295 out.go:171] Using Docker driver with root privileges
	I1211 23:55:51.026149    4295 cni.go:84] Creating CNI manager for ""
	I1211 23:55:51.026218    4295 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1211 23:55:51.026231    4295 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1211 23:55:51.026314    4295 start.go:353] cluster config:
	{Name:download-only-957488 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-957488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:55:51.029263    4295 out.go:99] Starting "download-only-957488" primary control-plane node in "download-only-957488" cluster
	I1211 23:55:51.029288    4295 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1211 23:55:51.032270    4295 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1211 23:55:51.032325    4295 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1211 23:55:51.032415    4295 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1211 23:55:51.049260    4295 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1211 23:55:51.049429    4295 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1211 23:55:51.049534    4295 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1211 23:55:51.155852    4295 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1211 23:55:51.155891    4295 cache.go:65] Caching tarball of preloaded images
	I1211 23:55:51.156070    4295 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1211 23:55:51.159515    4295 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1211 23:55:51.159546    4295 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1211 23:55:51.252128    4295 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1211 23:55:51.252307    4295 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-957488 host does not exist
	  To start a cluster, run: "minikube start -p download-only-957488"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
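
Note the inversion in this subtest: it passes because "minikube logs" exits non-zero (status 85 in this run). The profile was created with --download-only, so its control-plane host was never started, as the "host does not exist" message in the captured output says. A minimal sketch of that expectation via os/exec, reusing the profile name from this run:

// Sketch: logs for a never-started profile should exit non-zero.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("minikube", "logs", "-p", "download-only-957488").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("expected failure, exit status:", ee.ExitCode()) // 85 in this CI run
		return
	}
	fmt.Println("unexpected result:", err) // nil here would mean logs succeeded
}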

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-957488
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)
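
The DeleteAll and DeleteAlwaysSucceeds subtests above assert the same property: deletion exits zero even for a profile that only ever ran --download-only. A minimal sketch of that pair of checks, using the exact commands logged above:

// Sketch: both delete forms are expected to succeed unconditionally.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, args := range [][]string{
		{"delete", "--all"},
		{"delete", "-p", "download-only-957488"},
	} {
		if out, err := exec.Command("minikube", args...).CombinedOutput(); err != nil {
			fmt.Printf("minikube %v failed: %v\n%s", args, err, out)
			return
		}
	}
	fmt.Println("delete succeeded in both forms, as the subtests assert")
}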

TestDownloadOnly/v1.34.2/json-events (4.46s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-577600 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-577600 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.462338231s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (4.46s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1211 23:56:01.742259    4290 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1211 23:56:01.742293    4290 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-577600
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-577600: exit status 85 (295.783802ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-957488 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-957488 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-957488                                                                                                                                                               │ download-only-957488 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ start   │ -o=json --download-only -p download-only-577600 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-577600 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 23:55:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:55:57.333502    4495 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:55:57.333892    4495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:57.333901    4495 out.go:374] Setting ErrFile to fd 2...
	I1211 23:55:57.333907    4495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:55:57.334168    4495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1211 23:55:57.334551    4495 out.go:368] Setting JSON to true
	I1211 23:55:57.335334    4495 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2304,"bootTime":1765495054,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 23:55:57.335396    4495 start.go:143] virtualization:  
	I1211 23:55:57.338659    4495 out.go:99] [download-only-577600] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 23:55:57.339022    4495 notify.go:221] Checking for updates...
	I1211 23:55:57.341747    4495 out.go:171] MINIKUBE_LOCATION=22101
	I1211 23:55:57.344726    4495 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:55:57.347695    4495 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1211 23:55:57.350590    4495 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1211 23:55:57.353454    4495 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1211 23:55:57.359219    4495 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1211 23:55:57.359575    4495 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 23:55:57.401844    4495 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 23:55:57.402000    4495 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:55:57.488565    4495 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-11 23:55:57.47732589 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 23:55:57.488681    4495 docker.go:319] overlay module found
	I1211 23:55:57.491713    4495 out.go:99] Using the docker driver based on user configuration
	I1211 23:55:57.491746    4495 start.go:309] selected driver: docker
	I1211 23:55:57.491752    4495 start.go:927] validating driver "docker" against <nil>
	I1211 23:55:57.491856    4495 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:55:57.548218    4495 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-11 23:55:57.539083365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 23:55:57.548376    4495 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1211 23:55:57.548633    4495 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1211 23:55:57.548792    4495 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1211 23:55:57.552005    4495 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-577600 host does not exist
	  To start a cluster, run: "minikube start -p download-only-577600"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.30s)

TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-577600
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0-beta.0/json-events (4.25s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-489486 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-489486 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.252233697s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (4.25s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1211 23:56:06.646056    4290 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1211 23:56:06.646089    4290 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-489486
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-489486: exit status 85 (84.023545ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                             ARGS                                                                                             │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-957488 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-957488 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ delete  │ -p download-only-957488                                                                                                                                                                      │ download-only-957488 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │ 11 Dec 25 23:55 UTC │
	│ start   │ -o=json --download-only -p download-only-577600 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-577600 │ jenkins │ v1.37.0 │ 11 Dec 25 23:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 11 Dec 25 23:56 UTC │ 11 Dec 25 23:56 UTC │
	│ delete  │ -p download-only-577600                                                                                                                                                                      │ download-only-577600 │ jenkins │ v1.37.0 │ 11 Dec 25 23:56 UTC │ 11 Dec 25 23:56 UTC │
	│ start   │ -o=json --download-only -p download-only-489486 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-489486 │ jenkins │ v1.37.0 │ 11 Dec 25 23:56 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 23:56:02
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:56:02.436195    4699 out.go:360] Setting OutFile to fd 1 ...
	I1211 23:56:02.436305    4699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:56:02.436341    4699 out.go:374] Setting ErrFile to fd 2...
	I1211 23:56:02.436358    4699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 23:56:02.436645    4699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1211 23:56:02.437038    4699 out.go:368] Setting JSON to true
	I1211 23:56:02.437736    4699 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2309,"bootTime":1765495054,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 23:56:02.437799    4699 start.go:143] virtualization:  
	I1211 23:56:02.441360    4699 out.go:99] [download-only-489486] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 23:56:02.441597    4699 notify.go:221] Checking for updates...
	I1211 23:56:02.444780    4699 out.go:171] MINIKUBE_LOCATION=22101
	I1211 23:56:02.447901    4699 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:56:02.451108    4699 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1211 23:56:02.454182    4699 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1211 23:56:02.457273    4699 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1211 23:56:02.462981    4699 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1211 23:56:02.463285    4699 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 23:56:02.484610    4699 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 23:56:02.484719    4699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:56:02.552179    4699 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-11 23:56:02.543375698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 23:56:02.552280    4699 docker.go:319] overlay module found
	I1211 23:56:02.555381    4699 out.go:99] Using the docker driver based on user configuration
	I1211 23:56:02.555422    4699 start.go:309] selected driver: docker
	I1211 23:56:02.555441    4699 start.go:927] validating driver "docker" against <nil>
	I1211 23:56:02.555541    4699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 23:56:02.611709    4699 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-11 23:56:02.602160184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 23:56:02.611898    4699 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1211 23:56:02.612196    4699 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1211 23:56:02.612346    4699 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1211 23:56:02.615424    4699 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-489486 host does not exist
	  To start a cluster, run: "minikube start -p download-only-489486"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-489486
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I1211 23:56:07.923866    4290 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-427646 --alsologtostderr --binary-mirror http://127.0.0.1:42721 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-427646" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-427646
--- PASS: TestBinaryMirror (0.61s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-962736
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-962736: exit status 85 (71.271411ms)

-- stdout --
	* Profile "addons-962736" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-962736"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-962736
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-962736: exit status 85 (75.991115ms)

-- stdout --
	* Profile "addons-962736" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-962736"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (152.84s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-962736 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-962736 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m32.837297604s)
--- PASS: TestAddons/Setup (152.84s)

TestAddons/serial/Volcano (40.75s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:886: volcano-controller stabilized in 54.107376ms
addons_test.go:870: volcano-scheduler stabilized in 54.273466ms
addons_test.go:878: volcano-admission stabilized in 54.199824ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-kvlhm" [ba325726-0f5d-4777-bc6a-056052c6a627] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004642165s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-9vfrf" [b7d87cf2-6790-4073-8501-2d9174fdb3c4] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.003128755s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-p4886" [e20c6369-450e-47f5-b067-5711e7dd044a] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003390011s
addons_test.go:905: (dbg) Run:  kubectl --context addons-962736 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-962736 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-962736 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [9f5a77ea-c319-4071-87ab-215acfabd845] Pending
helpers_test.go:353: "test-job-nginx-0" [9f5a77ea-c319-4071-87ab-215acfabd845] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [9f5a77ea-c319-4071-87ab-215acfabd845] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.003437936s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-962736 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-962736 addons disable volcano --alsologtostderr -v=1: (12.044587676s)
--- PASS: TestAddons/serial/Volcano (40.75s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-962736 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-962736 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/serial/GCPAuth/FakeCredentials (9.88s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-962736 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-962736 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [b6f03961-bf41-440e-9d72-5d2f33cecb58] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [b6f03961-bf41-440e-9d72-5d2f33cecb58] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003464861s
addons_test.go:696: (dbg) Run:  kubectl --context addons-962736 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-962736 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-962736 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-962736 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.88s)

TestAddons/parallel/Registry (16.26s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.486077ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-xkt4d" [db0cca27-df61-4d8a-992e-7ef39f3e3429] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0086478s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-9c6wv" [b3342adc-0e75-4e9c-8333-080d1a884d38] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003215782s
addons_test.go:394: (dbg) Run:  kubectl --context addons-962736 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-962736 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-962736 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.039069508s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-962736 ip
2025/12/11 23:59:56 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-962736 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.26s)

TestAddons/parallel/RegistryCreds (0.99s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.186555ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-962736
addons_test.go:334: (dbg) Run:  kubectl --context addons-962736 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-962736 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.99s)

TestAddons/parallel/Ingress (19.17s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-962736 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-962736 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-962736 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [376b4d51-54e0-4a23-8f4a-772375f58d76] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [376b4d51-54e0-4a23-8f4a-772375f58d76] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004283969s
I1212 00:00:24.383238    4290 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-962736 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-962736 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-962736 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-962736 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-962736 addons disable ingress-dns --alsologtostderr -v=1: (1.392200261s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-962736 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-962736 addons disable ingress --alsologtostderr -v=1: (8.009471772s)
--- PASS: TestAddons/parallel/Ingress (19.17s)

TestAddons/parallel/InspektorGadget (11.05s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-mj656" [9bc2bbdc-544a-48b0-bfc1-b0c2326dc283] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004462941s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-962736 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-962736 addons disable inspektor-gadget --alsologtostderr -v=1: (6.041530022s)
--- PASS: TestAddons/parallel/InspektorGadget (11.05s)

TestAddons/parallel/MetricsServer (5.87s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.26082ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-phkgw" [e169b109-04b4-441c-a107-e9e4ff822cb5] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003545703s
addons_test.go:465: (dbg) Run:  kubectl --context addons-962736 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-962736 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.87s)

TestAddons/parallel/CSI (45.7s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1211 23:59:57.807792    4290 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1211 23:59:57.811380    4290 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1211 23:59:57.811412    4290 kapi.go:107] duration metric: took 7.308871ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 7.320768ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-962736 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-962736 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [e74b14a9-efba-4d3d-9cf9-561eb14df4d2] Pending
helpers_test.go:353: "task-pv-pod" [e74b14a9-efba-4d3d-9cf9-561eb14df4d2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [e74b14a9-efba-4d3d-9cf9-561eb14df4d2] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003229841s
addons_test.go:574: (dbg) Run:  kubectl --context addons-962736 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-962736 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-962736 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-962736 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-962736 delete pod task-pv-pod: (1.233268451s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-962736 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-962736 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-962736 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [073de28e-ddb1-4cf4-876e-17040b622656] Pending
helpers_test.go:353: "task-pv-pod-restore" [073de28e-ddb1-4cf4-876e-17040b622656] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [073de28e-ddb1-4cf4-876e-17040b622656] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003465455s
addons_test.go:616: (dbg) Run:  kubectl --context addons-962736 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-962736 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-962736 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-962736 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-962736 addons disable volumesnapshots --alsologtostderr -v=1: (1.01462139s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-962736 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-962736 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.880164054s)
--- PASS: TestAddons/parallel/CSI (45.70s)

TestAddons/parallel/Headlamp (17.26s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-962736 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-962736 --alsologtostderr -v=1: (1.187637614s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-v9pqx" [17ffc055-94f7-42f3-91b9-a3b77cd0a050] Pending
helpers_test.go:353: "headlamp-dfcdc64b-v9pqx" [17ffc055-94f7-42f3-91b9-a3b77cd0a050] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-v9pqx" [17ffc055-94f7-42f3-91b9-a3b77cd0a050] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.012961776s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-962736 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-962736 addons disable headlamp --alsologtostderr -v=1: (6.058728s)
--- PASS: TestAddons/parallel/Headlamp (17.26s)

TestAddons/parallel/CloudSpanner (6.6s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-k66g4" [5150575f-1cf0-4746-bb1a-0896688c2fc0] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003841674s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-962736 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.60s)

TestAddons/parallel/LocalPath (53.05s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-962736 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-962736 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-962736 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [7bc738a0-c24c-4324-96ee-7219435b8f48] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [7bc738a0-c24c-4324-96ee-7219435b8f48] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [7bc738a0-c24c-4324-96ee-7219435b8f48] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003213821s
addons_test.go:969: (dbg) Run:  kubectl --context addons-962736 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-962736 ssh "cat /opt/local-path-provisioner/pvc-83ce0872-52b7-47f8-820e-2c4600e20b5f_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-962736 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-962736 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-962736 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-962736 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.939026613s)
--- PASS: TestAddons/parallel/LocalPath (53.05s)

TestAddons/parallel/NvidiaDevicePlugin (6.59s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-rm9m7" [bf54c5ed-6000-4385-9459-ce52e7ce37c7] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00469772s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-962736 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.59s)

TestAddons/parallel/Yakd (12.06s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-tp8gj" [2b84bf86-cc2d-4a09-ab2b-71c1f523e77d] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003723653s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-962736 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-962736 addons disable yakd --alsologtostderr -v=1: (6.056199626s)
--- PASS: TestAddons/parallel/Yakd (12.06s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-962736
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-962736: (12.122300458s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-962736
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-962736
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-962736
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestCertOptions (38.83s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-805684 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-805684 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (35.692942115s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-805684 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-805684 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-805684 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-805684" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-805684
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-805684: (2.408045722s)
--- PASS: TestCertOptions (38.83s)

TestCertExpiration (221.46s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-262857 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E1212 01:15:57.046346    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-262857 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (32.672345304s)
E1212 01:17:20.119166    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:18:41.615292    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:18:52.051837    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-262857 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-262857 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.329332411s)
helpers_test.go:176: Cleaning up "cert-expiration-262857" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-262857
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-262857: (2.461443708s)
--- PASS: TestCertExpiration (221.46s)

TestForceSystemdFlag (37.46s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-554365 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-554365 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (35.036051293s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-554365 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-554365" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-554365
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-554365: (2.110186239s)
--- PASS: TestForceSystemdFlag (37.46s)

TestForceSystemdEnv (35.94s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-104389 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-104389 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.209298023s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-104389 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-104389" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-104389
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-104389: (2.323336808s)
--- PASS: TestForceSystemdEnv (35.94s)

TestDockerEnvContainerd (45.9s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-630271 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-630271 --driver=docker  --container-runtime=containerd: (29.868913047s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-630271"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-630271": (1.093103214s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-wnaAFfE9jj1v/agent.23742" SSH_AGENT_PID="23743" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-wnaAFfE9jj1v/agent.23742" SSH_AGENT_PID="23743" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-wnaAFfE9jj1v/agent.23742" SSH_AGENT_PID="23743" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.304967755s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-wnaAFfE9jj1v/agent.23742" SSH_AGENT_PID="23743" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-630271" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-630271
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-630271: (2.21598787s)
--- PASS: TestDockerEnvContainerd (45.90s)

TestErrorSpam/setup (30.24s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-461629 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-461629 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-461629 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-461629 --driver=docker  --container-runtime=containerd: (30.238396864s)
--- PASS: TestErrorSpam/setup (30.24s)

TestErrorSpam/start (0.84s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461629 --log_dir /tmp/nospam-461629 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461629 --log_dir /tmp/nospam-461629 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461629 --log_dir /tmp/nospam-461629 start --dry-run
--- PASS: TestErrorSpam/start (0.84s)

TestErrorSpam/status (1.12s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461629 --log_dir /tmp/nospam-461629 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461629 --log_dir /tmp/nospam-461629 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461629 --log_dir /tmp/nospam-461629 status
--- PASS: TestErrorSpam/status (1.12s)

TestErrorSpam/pause (1.7s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461629 --log_dir /tmp/nospam-461629 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461629 --log_dir /tmp/nospam-461629 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461629 --log_dir /tmp/nospam-461629 pause
--- PASS: TestErrorSpam/pause (1.70s)

TestErrorSpam/unpause (1.8s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461629 --log_dir /tmp/nospam-461629 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461629 --log_dir /tmp/nospam-461629 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461629 --log_dir /tmp/nospam-461629 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

TestErrorSpam/stop (1.63s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461629 --log_dir /tmp/nospam-461629 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-461629 --log_dir /tmp/nospam-461629 stop: (1.429305747s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461629 --log_dir /tmp/nospam-461629 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-461629 --log_dir /tmp/nospam-461629 stop
--- PASS: TestErrorSpam/stop (1.63s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (79.23s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-095481 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1212 00:03:41.621169    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:03:41.627966    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:03:41.639310    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:03:41.660678    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:03:41.702040    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:03:41.783410    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:03:41.944886    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:03:42.267982    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:03:42.910018    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:03:44.191393    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:03:46.752725    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:03:51.874077    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:04:02.116467    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:04:22.599148    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-095481 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m19.230037624s)
--- PASS: TestFunctional/serial/StartWithProxy (79.23s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (7.17s)
=== RUN   TestFunctional/serial/SoftStart
I1212 00:04:42.677535    4290 config.go:182] Loaded profile config "functional-095481": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-095481 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-095481 --alsologtostderr -v=8: (7.162884429s)
functional_test.go:678: soft start took 7.171828563s for "functional-095481" cluster.
I1212 00:04:49.848783    4290 config.go:182] Loaded profile config "functional-095481": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (7.17s)
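A "soft start" is simply a second minikube start against a profile that is already running: minikube reconciles the existing node instead of reprovisioning it, which is why the run above completes in about seven seconds. A minimal sketch of the flow, assuming a hypothetical profile named demo:

    minikube start -p demo --driver=docker --container-runtime=containerd   # first start: provisions the node
    minikube start -p demo                                                  # soft start: reuses the running node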

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.12s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-095481 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-095481 cache add registry.k8s.io/pause:3.1: (1.316806733s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-095481 cache add registry.k8s.io/pause:3.3: (1.112352035s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-095481 cache add registry.k8s.io/pause:latest: (1.04607615s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)
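The cache group above exercises a plain add/list/delete cycle: cache add pulls the image on the host and loads it into the node's runtime, while list and delete operate on the host-side cache and so take no profile flag (as in the CacheDelete and list tests below). A sketch of the same cycle outside the harness, with the hypothetical demo profile:

    minikube -p demo cache add registry.k8s.io/pause:3.1    # pull on the host, load into the node
    minikube cache list                                     # show the host-side cache contents
    minikube cache delete registry.k8s.io/pause:3.1         # drop the image from the cache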

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-095481 /tmp/TestFunctionalserialCacheCmdcacheadd_local1112301639/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 cache add minikube-local-cache-test:functional-095481
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 cache delete minikube-local-cache-test:functional-095481
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-095481
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.9s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095481 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (289.591095ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.90s)
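Step by step, the reload check above deletes the image from the node's containerd store with crictl, confirms crictl inspecti now fails (the expected non-zero exit), and then uses minikube cache reload to push the still-cached image back into the node. The same flow, sketched against the hypothetical demo profile:

    minikube -p demo ssh sudo crictl rmi registry.k8s.io/pause:latest       # remove from the node only
    minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exits non-zero: image is gone
    minikube -p demo cache reload                                           # re-push everything in the host cache
    minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again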

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 kubectl -- --context functional-095481 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-095481 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (50.15s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-095481 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1212 00:05:03.561097    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-095481 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (50.142527081s)
functional_test.go:776: restart took 50.142619053s for "functional-095481" cluster.
I1212 00:05:47.704920    4290 config.go:182] Loaded profile config "functional-095481": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (50.15s)
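--extra-config takes component.key=value pairs that are passed through to the kubeadm-managed component flags; the restart above enables the NamespaceAutoProvision admission plugin on the API server and waits for all components to come back. A sketch under the same assumptions:

    minikube start -p demo \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all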

                                                
                                    
TestFunctional/serial/ComponentHealth (0.09s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-095481 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)
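The health check above parses the JSON pod list for the control plane; roughly the same phase/readiness information can be pulled with a jsonpath query (the demo context is a stand-in):

    kubectl --context demo get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'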

                                                
                                    
TestFunctional/serial/LogsCmd (1.48s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-095481 logs: (1.475983818s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.43s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 logs --file /tmp/TestFunctionalserialLogsFileCmd516811064/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-095481 logs --file /tmp/TestFunctionalserialLogsFileCmd516811064/001/logs.txt: (1.430102906s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.43s)

                                                
                                    
TestFunctional/serial/InvalidService (4.44s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-095481 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-095481
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-095481: exit status 115 (620.014541ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31794 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-095481 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.44s)
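testdata/invalidsvc.yaml is not reproduced in this log, but the shape such a manifest needs is a NodePort service whose selector matches no running pod, so minikube service can resolve the node URL yet finds no backend and exits with SVC_UNREACHABLE. A hypothetical equivalent:

    kubectl --context demo apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: invalid-svc
    spec:
      type: NodePort
      selector:
        app: does-not-exist    # no pod carries this label
      ports:
      - port: 80
    EOF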

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.46s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095481 config get cpus: exit status 14 (72.351663ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095481 config get cpus: exit status 14 (71.442789ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
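Exit status 14 is what minikube config get returns when the key is unset, which is exactly what the unset/get/set/get/unset/get sequence above asserts. A sketch (the config store is typically $HOME/.minikube/config/config.json):

    minikube config set cpus 2
    minikube config get cpus       # prints 2, exit 0
    minikube config unset cpus
    minikube config get cpus       # "specified key could not be found in config", exit 14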

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.56s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-095481 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-095481 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 39273: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.56s)

                                                
                                    
TestFunctional/parallel/DryRun (0.51s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-095481 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-095481 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (232.261535ms)

                                                
                                                
-- stdout --
	* [functional-095481] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:06:25.205152   38683 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:06:25.205318   38683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:06:25.205362   38683 out.go:374] Setting ErrFile to fd 2...
	I1212 00:06:25.205383   38683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:06:25.205749   38683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:06:25.206737   38683 out.go:368] Setting JSON to false
	I1212 00:06:25.207705   38683 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2932,"bootTime":1765495054,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 00:06:25.207810   38683 start.go:143] virtualization:  
	I1212 00:06:25.213087   38683 out.go:179] * [functional-095481] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:06:25.216264   38683 notify.go:221] Checking for updates...
	I1212 00:06:25.217161   38683 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:06:25.221002   38683 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:06:25.224032   38683 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:06:25.227143   38683 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 00:06:25.230207   38683 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:06:25.233215   38683 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:06:25.236489   38683 config.go:182] Loaded profile config "functional-095481": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 00:06:25.237113   38683 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:06:25.261621   38683 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:06:25.261751   38683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:06:25.366561   38683 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-12 00:06:25.352194487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:06:25.366664   38683 docker.go:319] overlay module found
	I1212 00:06:25.370148   38683 out.go:179] * Using the docker driver based on existing profile
	I1212 00:06:25.373583   38683 start.go:309] selected driver: docker
	I1212 00:06:25.373610   38683 start.go:927] validating driver "docker" against &{Name:functional-095481 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-095481 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:06:25.373720   38683 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:06:25.376989   38683 out.go:203] 
	W1212 00:06:25.380119   38683 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 00:06:25.383093   38683 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-095481 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1212 00:06:25.482386    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/DryRun (0.51s)
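--dry-run runs the full validation path without creating anything; here the 250MB request trips the RSRC_INSUFFICIENT_REQ_MEMORY check (usable minimum 1800MB) and exits 23, while the second invocation with no memory override validates cleanly. Sketch:

    minikube start -p demo --dry-run --memory 250MB; echo "exit=$?"   # exit=23: below the 1800MB floor
    minikube start -p demo --dry-run; echo "exit=$?"                  # exit=0: config validates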

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.23s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-095481 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-095481 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (225.165999ms)

                                                
                                                
-- stdout --
	* [functional-095481] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:06:24.986074   38635 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:06:24.986191   38635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:06:24.986201   38635 out.go:374] Setting ErrFile to fd 2...
	I1212 00:06:24.986207   38635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:06:24.987161   38635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:06:24.987547   38635 out.go:368] Setting JSON to false
	I1212 00:06:24.988433   38635 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2931,"bootTime":1765495054,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 00:06:24.988502   38635 start.go:143] virtualization:  
	I1212 00:06:24.991890   38635 out.go:179] * [functional-095481] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1212 00:06:24.996123   38635 notify.go:221] Checking for updates...
	I1212 00:06:24.996953   38635 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:06:25.001515   38635 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:06:25.004939   38635 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:06:25.007899   38635 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 00:06:25.010941   38635 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:06:25.014063   38635 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:06:25.017786   38635 config.go:182] Loaded profile config "functional-095481": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 00:06:25.018545   38635 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:06:25.053772   38635 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:06:25.053892   38635 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:06:25.134335   38635 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-12 00:06:25.124501401 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:06:25.134436   38635 docker.go:319] overlay module found
	I1212 00:06:25.137752   38635 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1212 00:06:25.140537   38635 start.go:309] selected driver: docker
	I1212 00:06:25.140561   38635 start.go:927] validating driver "docker" against &{Name:functional-095481 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-095481 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:06:25.140663   38635 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:06:25.144311   38635 out.go:203] 
	W1212 00:06:25.147233   38635 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 00:06:25.150177   38635 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.04s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)
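status -f takes a Go template over the status struct (the fields exercised above are Host, Kubelet, APIServer, and Kubeconfig), and -o json emits the same struct as JSON. A sketch with the hypothetical demo profile:

    minikube -p demo status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
    minikube -p demo status -o json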

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.61s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-095481 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-095481 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-2d98r" [3bed585c-e172-4eac-a8d4-1c38d24e6dc1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-2d98r" [3bed585c-e172-4eac-a8d4-1c38d24e6dc1] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.007106319s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30270
functional_test.go:1680: http://192.168.49.2:30270: success! body:
Request served by hello-node-connect-7d85dfc575-2d98r

HTTP/1.1 GET /

Host: 192.168.49.2:30270
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.61s)
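The flow above is the standard NodePort round trip: create a deployment, expose it, ask minikube for the node URL, and hit it. Sketched against the hypothetical demo profile:

    kubectl --context demo create deployment hello --image kicbase/echo-server
    kubectl --context demo expose deployment hello --type=NodePort --port=8080
    url=$(minikube -p demo service hello --url)
    curl "$url"    # echo-server replies with the request it received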

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (19.92s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [fc68c480-11b4-4099-87d7-77fb4ccb5b93] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003571801s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-095481 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-095481 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-095481 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-095481 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [028829c6-a493-4ca3-8e3b-507006c2421a] Pending
helpers_test.go:353: "sp-pod" [028829c6-a493-4ca3-8e3b-507006c2421a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [028829c6-a493-4ca3-8e3b-507006c2421a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003483575s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-095481 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-095481 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-095481 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [a37f9836-1abd-41cc-9b52-16497345dbe8] Pending
helpers_test.go:353: "sp-pod" [a37f9836-1abd-41cc-9b52-16497345dbe8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004122648s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-095481 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (19.92s)
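The persistence assertion is the interesting part: a file written into the PVC-backed mount survives deleting and recreating the consuming pod. The same check, sketched with the demo context and the pod manifest used above:

    kubectl --context demo exec sp-pod -- touch /tmp/mount/foo
    kubectl --context demo delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context demo apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
    kubectl --context demo exec sp-pod -- ls /tmp/mount                     # foo is still there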

                                                
                                    
TestFunctional/parallel/SSHCmd (0.76s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.76s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.5s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh -n functional-095481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 cp functional-095481:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd660971418/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh -n functional-095481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh -n functional-095481 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.50s)

                                                
                                    
TestFunctional/parallel/FileSync (0.37s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4290/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "sudo cat /etc/test/nested/copy/4290/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

                                                
                                    
TestFunctional/parallel/CertSync (2.19s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4290.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "sudo cat /etc/ssl/certs/4290.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4290.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "sudo cat /usr/share/ca-certificates/4290.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/42902.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "sudo cat /etc/ssl/certs/42902.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/42902.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "sudo cat /usr/share/ca-certificates/42902.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.19s)
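The /etc/ssl/certs/51391683.0 and 3ec20f2e.0 names checked above follow OpenSSL's hashed-directory convention: the filename is the certificate's subject hash plus a sequence number. Assuming access to the synced .pem from this run, the hash can be recomputed with:

    openssl x509 -noout -hash -in 4290.pem    # prints the subject hash that names the matching .0 entry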

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-095481 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095481 ssh "sudo systemctl is-active docker": exit status 1 (387.077456ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095481 ssh "sudo systemctl is-active crio": exit status 1 (336.891104ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
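The exit status 3 in both runs is systemd's standard code for an inactive unit, so the test treats a non-zero exit plus the literal "inactive" on stdout as success: with containerd as the active runtime, docker and crio must both be stopped. Sketch:

    minikube -p demo ssh "sudo systemctl is-active docker"; echo "exit=$?"   # "inactive", exit=3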

                                                
                                    
TestFunctional/parallel/License (0.24s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.24s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-095481 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-095481 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-095481 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 36070: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-095481 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-095481 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-095481 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [0708bbc9-ebc7-404f-906b-3c4529e9c7de] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [0708bbc9-ebc7-404f-906b-3c4529e9c7de] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003914722s
I1212 00:06:05.476836    4290 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-095481 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.147.68 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
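minikube tunnel creates a host route into the cluster's service network and assigns LoadBalancer ingress IPs, which is why the test can reach 10.96.147.68 (this run's service IP) directly from the host. A sketch of the manual flow:

    minikube -p demo tunnel &    # must stay running; may prompt for sudo to add the route
    kubectl --context demo get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://10.96.147.68/    # service IP reachable from the host while the tunnel is up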

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-095481 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
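
Note: the "failed to stop process: signal: terminated" line above is benign, which is why the test still passes: the tunnel daemon is stopped with SIGTERM, and Go's (*exec.Cmd).Wait reports a child killed by SIGTERM as an error whose text is "signal: terminated". A self-contained sketch (not the test's code; "sleep" stands in for the tunnel process):

// sigterm.go - demonstrates the "signal: terminated" wait result on Linux.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for the long-running daemon
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	time.Sleep(100 * time.Millisecond)
	_ = cmd.Process.Signal(syscall.SIGTERM) // what "stopping [...] ..." does
	if err := cmd.Wait(); err != nil {
		fmt.Println("stop result:", err) // prints: stop result: signal: terminated
	}
}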

TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-095481 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-095481 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-4n7tt" [51ae3aa7-5b89-4c5e-8bb0-1d3c7c35bddd] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-4n7tt" [51ae3aa7-5b89-4c5e-8bb0-1d3c7c35bddd] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003592786s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "393.602329ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "56.368223ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "355.476117ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "50.801586ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/any-port (8.25s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-095481 /tmp/TestFunctionalparallelMountCmdany-port1658846270/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765497978868836985" to /tmp/TestFunctionalparallelMountCmdany-port1658846270/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765497978868836985" to /tmp/TestFunctionalparallelMountCmdany-port1658846270/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765497978868836985" to /tmp/TestFunctionalparallelMountCmdany-port1658846270/001/test-1765497978868836985
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095481 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (329.925784ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1212 00:06:19.202146    4290 retry.go:31] will retry after 641.119338ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 00:06 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 00:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 00:06 test-1765497978868836985
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh cat /mount-9p/test-1765497978868836985
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-095481 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [03f7373a-40c1-467e-85bf-abcbb7d61c8a] Pending
helpers_test.go:353: "busybox-mount" [03f7373a-40c1-467e-85bf-abcbb7d61c8a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [03f7373a-40c1-467e-85bf-abcbb7d61c8a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [03f7373a-40c1-467e-85bf-abcbb7d61c8a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003980095s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-095481 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-095481 /tmp/TestFunctionalparallelMountCmdany-port1658846270/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.25s)
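
Note: the first findmnt probe above fails while the 9p server is still starting, and retry.go:31 simply re-runs it after a delay. The shape of that bounded retry is roughly the following stdlib sketch; it is illustrative, not minikube's pkg/util/retry, and the attempt count and initial delay are assumptions.

// retrysketch.go - re-run a command with a growing delay between attempts.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func runWithRetry(attempts int, name string, args ...string) error {
	delay := 500 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command(name, args...).Run(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // rough backoff; the real helper randomizes the interval
	}
	return err
}

func main() {
	// Same probe the test issues over minikube ssh.
	err := runWithRetry(5, "out/minikube-linux-arm64",
		"-p", "functional-095481", "ssh", "findmnt -T /mount-9p | grep 9p")
	if err != nil {
		fmt.Println("mount never appeared:", err)
	}
}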

TestFunctional/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 service list -o json
functional_test.go:1504: Took "532.949694ms" to run "out/minikube-linux-arm64 -p functional-095481 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30976
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

TestFunctional/parallel/ServiceCmd/URL (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30976
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)
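
Note: once "service hello-node --url" has printed an endpoint, the NodePort can be probed directly. A trivial sketch using the URL discovered in this run (the address is specific to this job's Docker network and will differ elsewhere):

// probe.go - GET the NodePort endpoint that the service commands reported.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://192.168.49.2:30976") // from the log above
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%s bytes=%d\n", resp.Status, len(body))
}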

TestFunctional/parallel/MountCmd/specific-port (2.33s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-095481 /tmp/TestFunctionalparallelMountCmdspecific-port1914051617/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095481 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (621.130129ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1212 00:06:27.735862    4290 retry.go:31] will retry after 465.78777ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-095481 /tmp/TestFunctionalparallelMountCmdspecific-port1914051617/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095481 ssh "sudo umount -f /mount-9p": exit status 1 (320.973175ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-095481 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-095481 /tmp/TestFunctionalparallelMountCmdspecific-port1914051617/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.33s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.67s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-095481 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2146647612/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-095481 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2146647612/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-095481 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2146647612/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095481 ssh "findmnt -T" /mount1: exit status 1 (870.255833ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1212 00:06:30.315171    4290 retry.go:31] will retry after 674.12462ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-095481 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-095481 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2146647612/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-095481 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2146647612/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-095481 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2146647612/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.67s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.35s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-095481 version -o=json --components: (1.352958154s)
--- PASS: TestFunctional/parallel/Version/components (1.35s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-095481 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-095481
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-095481
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-095481 image ls --format short --alsologtostderr:
I1212 00:06:40.097490   41777 out.go:360] Setting OutFile to fd 1 ...
I1212 00:06:40.097595   41777 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:40.097601   41777 out.go:374] Setting ErrFile to fd 2...
I1212 00:06:40.097606   41777 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:40.097977   41777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
I1212 00:06:40.103305   41777 config.go:182] Loaded profile config "functional-095481": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 00:06:40.103476   41777 config.go:182] Loaded profile config "functional-095481": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 00:06:40.104042   41777 cli_runner.go:164] Run: docker container inspect functional-095481 --format={{.State.Status}}
I1212 00:06:40.127512   41777 ssh_runner.go:195] Run: systemctl --version
I1212 00:06:40.127569   41777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-095481
I1212 00:06:40.152540   41777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-095481/id_rsa Username:docker}
I1212 00:06:40.266191   41777 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-095481 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver              │ v1.34.2            │ sha256:b178af │ 24.6MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2            │ sha256:94bff1 │ 22.8MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/minikube-local-cache-test │ functional-095481  │ sha256:2af6a2 │ 992B   │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ registry.k8s.io/kube-scheduler              │ v1.34.2            │ sha256:4f982e │ 15.8MB │
│ docker.io/kicbase/echo-server               │ functional-095481  │ sha256:ce2d2c │ 2.17MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ public.ecr.aws/nginx/nginx                  │ alpine             │ sha256:10afed │ 23MB   │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2            │ sha256:1b3491 │ 20.7MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-095481 image ls --format table --alsologtostderr:
I1212 00:06:40.367892   41856 out.go:360] Setting OutFile to fd 1 ...
I1212 00:06:40.368102   41856 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:40.368127   41856 out.go:374] Setting ErrFile to fd 2...
I1212 00:06:40.368176   41856 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:40.368479   41856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
I1212 00:06:40.369227   41856 config.go:182] Loaded profile config "functional-095481": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 00:06:40.369407   41856 config.go:182] Loaded profile config "functional-095481": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 00:06:40.370031   41856 cli_runner.go:164] Run: docker container inspect functional-095481 --format={{.State.Status}}
I1212 00:06:40.395872   41856 ssh_runner.go:195] Run: systemctl --version
I1212 00:06:40.395929   41856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-095481
I1212 00:06:40.419547   41856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-095481/id_rsa Username:docker}
I1212 00:06:40.538756   41856 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-095481 image ls --format json --alsologtostderr:
[{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:2af6a2f60c44ae40a2b1bc226758dd0a3c3f1c0d22fd7d74035513945443e825","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-095481"],"size":"992"},{"id":"sha256:10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22985759"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"20718696"},{"id":"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"22802260"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"24559643"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-095481"],"size":"2173567"},{"id":"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"15775785"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-095481 image ls --format json --alsologtostderr:
I1212 00:06:40.388323   41852 out.go:360] Setting OutFile to fd 1 ...
I1212 00:06:40.388429   41852 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:40.388440   41852 out.go:374] Setting ErrFile to fd 2...
I1212 00:06:40.388445   41852 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:40.388731   41852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
I1212 00:06:40.389362   41852 config.go:182] Loaded profile config "functional-095481": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 00:06:40.389483   41852 config.go:182] Loaded profile config "functional-095481": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 00:06:40.391613   41852 cli_runner.go:164] Run: docker container inspect functional-095481 --format={{.State.Status}}
I1212 00:06:40.416853   41852 ssh_runner.go:195] Run: systemctl --version
I1212 00:06:40.416927   41852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-095481
I1212 00:06:40.437453   41852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-095481/id_rsa Username:docker}
I1212 00:06:40.544125   41852 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
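
Note: the JSON format is the easiest of the four to script against. A small sketch that shells out to "image ls --format json" and decodes it; the struct fields (id, repoDigests, repoTags, size) are inferred from the output captured above, not from a documented stable schema.

// imagels.go - list image tags and sizes via `minikube image ls --format json`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, serialized as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "functional-095481", "image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%-60s %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}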

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-095481 image ls --format yaml --alsologtostderr:
- id: sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "22802260"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:2af6a2f60c44ae40a2b1bc226758dd0a3c3f1c0d22fd7d74035513945443e825
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-095481
size: "992"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "24559643"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "15775785"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-095481
size: "2173567"
- id: sha256:10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22985759"
- id: sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "20718696"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-095481 image ls --format yaml --alsologtostderr:
I1212 00:06:40.073821   41778 out.go:360] Setting OutFile to fd 1 ...
I1212 00:06:40.074014   41778 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:40.074026   41778 out.go:374] Setting ErrFile to fd 2...
I1212 00:06:40.074032   41778 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:40.074312   41778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
I1212 00:06:40.074954   41778 config.go:182] Loaded profile config "functional-095481": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 00:06:40.075134   41778 config.go:182] Loaded profile config "functional-095481": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 00:06:40.075672   41778 cli_runner.go:164] Run: docker container inspect functional-095481 --format={{.State.Status}}
I1212 00:06:40.105529   41778 ssh_runner.go:195] Run: systemctl --version
I1212 00:06:40.105588   41778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-095481
I1212 00:06:40.140206   41778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-095481/id_rsa Username:docker}
I1212 00:06:40.254496   41778 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-095481 ssh pgrep buildkitd: exit status 1 (278.378388ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 image build -t localhost/my-image:functional-095481 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-095481 image build -t localhost/my-image:functional-095481 testdata/build --alsologtostderr: (3.246089302s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-095481 image build -t localhost/my-image:functional-095481 testdata/build --alsologtostderr:
I1212 00:06:40.916811   41988 out.go:360] Setting OutFile to fd 1 ...
I1212 00:06:40.917058   41988 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:40.917087   41988 out.go:374] Setting ErrFile to fd 2...
I1212 00:06:40.917106   41988 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:40.917373   41988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
I1212 00:06:40.918049   41988 config.go:182] Loaded profile config "functional-095481": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 00:06:40.919998   41988 config.go:182] Loaded profile config "functional-095481": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1212 00:06:40.920753   41988 cli_runner.go:164] Run: docker container inspect functional-095481 --format={{.State.Status}}
I1212 00:06:40.938849   41988 ssh_runner.go:195] Run: systemctl --version
I1212 00:06:40.938928   41988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-095481
I1212 00:06:40.955974   41988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-095481/id_rsa Username:docker}
I1212 00:06:41.061447   41988 build_images.go:162] Building image from path: /tmp/build.4279558710.tar
I1212 00:06:41.061517   41988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 00:06:41.069447   41988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4279558710.tar
I1212 00:06:41.073026   41988 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4279558710.tar: stat -c "%s %y" /var/lib/minikube/build/build.4279558710.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4279558710.tar': No such file or directory
I1212 00:06:41.073057   41988 ssh_runner.go:362] scp /tmp/build.4279558710.tar --> /var/lib/minikube/build/build.4279558710.tar (3072 bytes)
I1212 00:06:41.093766   41988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4279558710
I1212 00:06:41.102206   41988 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4279558710 -xf /var/lib/minikube/build/build.4279558710.tar
I1212 00:06:41.110623   41988 containerd.go:394] Building image: /var/lib/minikube/build/build.4279558710
I1212 00:06:41.110693   41988 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4279558710 --local dockerfile=/var/lib/minikube/build/build.4279558710 --output type=image,name=localhost/my-image:functional-095481
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:e5d15a33f961663cf6e83635f3e6c0ca97c415f5972dc3b5fa71f7610e4de88b
#8 exporting manifest sha256:e5d15a33f961663cf6e83635f3e6c0ca97c415f5972dc3b5fa71f7610e4de88b 0.0s done
#8 exporting config sha256:76d1ac5c65002b5ac32c00014ab6edcafd1eeb1dc4d2faa41073c73bface6948 0.0s done
#8 naming to localhost/my-image:functional-095481 done
#8 DONE 0.1s
I1212 00:06:44.078230   41988 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4279558710 --local dockerfile=/var/lib/minikube/build/build.4279558710 --output type=image,name=localhost/my-image:functional-095481: (2.967509779s)
I1212 00:06:44.078303   41988 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4279558710
I1212 00:06:44.090504   41988 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4279558710.tar
I1212 00:06:44.101082   41988 build_images.go:218] Built localhost/my-image:functional-095481 from /tmp/build.4279558710.tar
I1212 00:06:44.101118   41988 build_images.go:134] succeeded building to: functional-095481
I1212 00:06:44.101124   41988 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.76s)
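
Note: the stderr above shows the whole remote-build flow: the build context is tarred to /tmp/build.4279558710.tar, copied into the node under /var/lib/minikube/build, unpacked, and handed to BuildKit. The buildctl invocation is reproduced below as a runnable sketch; in the test it executes inside the node over SSH, so running it verbatim assumes a machine with buildkitd and that staged context directory.

// buildsketch.go - the buildctl step logged by build_images.go, as exec'd code.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	dir := "/var/lib/minikube/build/build.4279558710" // staged build context
	cmd := exec.Command("sudo", "buildctl", "build",
		"--frontend", "dockerfile.v0",
		"--local", "context="+dir,
		"--local", "dockerfile="+dir,
		"--output", "type=image,name=localhost/my-image:functional-095481")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // the "#1 ... #8" progress shown above
	if err != nil {
		panic(err)
	}
}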

TestFunctional/parallel/ImageCommands/Setup (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-095481
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 image load --daemon kicbase/echo-server:functional-095481 --alsologtostderr
2025/12/12 00:06:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-095481 image load --daemon kicbase/echo-server:functional-095481 --alsologtostderr: (1.0112814s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.28s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 image load --daemon kicbase/echo-server:functional-095481 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-095481 image load --daemon kicbase/echo-server:functional-095481 --alsologtostderr: (1.025138635s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-095481
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 image load --daemon kicbase/echo-server:functional-095481 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.48s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 image save kicbase/echo-server:functional-095481 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 image rm kicbase/echo-server:functional-095481 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-095481
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-095481 image save --daemon kicbase/echo-server:functional-095481 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-095481
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-095481
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-095481
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-095481
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-767012 cache add registry.k8s.io/pause:3.1: (1.162668999s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-767012 cache add registry.k8s.io/pause:3.3: (1.172102565s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-767012 cache add registry.k8s.io/pause:latest: (1.108072221s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.03s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1072211245/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 cache add minikube-local-cache-test:functional-767012
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 cache delete minikube-local-cache-test:functional-767012
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-767012
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.03s)
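
A hand-run version of the add_local flow above, with /tmp/ctx standing in for the test's generated build context (hypothetical path):

  docker build -t minikube-local-cache-test:functional-767012 /tmp/ctx
  out/minikube-linux-arm64 -p functional-767012 cache add minikube-local-cache-test:functional-767012
  out/minikube-linux-arm64 -p functional-767012 cache delete minikube-local-cache-test:functional-767012
  docker rmi minikube-local-cache-test:functional-767012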

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.88s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (282.839195ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.88s)
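
The cache_reload sequence above can be reproduced by hand; a minimal sketch against the same profile (the "no such image" failure in the middle is the expected state, not an error):

  # drop the image from the node's containerd store
  out/minikube-linux-arm64 -p functional-767012 ssh sudo crictl rmi registry.k8s.io/pause:latest
  # now fails with exit status 1: image is gone
  out/minikube-linux-arm64 -p functional-767012 ssh sudo crictl inspecti registry.k8s.io/pause:latest
  # push everything in minikube's local cache back into the node
  out/minikube-linux-arm64 -p functional-767012 cache reload
  # succeeds again
  out/minikube-linux-arm64 -p functional-767012 ssh sudo crictl inspecti registry.k8s.io/pause:latest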

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.97s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.97s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3942569721/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-767012 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3942569721/001/logs.txt: (1.015535139s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 config get cpus: exit status 14 (80.355182ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 config get cpus: exit status 14 (80.192217ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.44s)
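
Exit status 14 is how the test tells an unset key apart from a real failure; a sketch of the get/set/unset round trip exercised above:

  out/minikube-linux-arm64 -p functional-767012 config get cpus    # unset: exit status 14
  out/minikube-linux-arm64 -p functional-767012 config set cpus 2
  out/minikube-linux-arm64 -p functional-767012 config get cpus    # prints 2, exit 0
  out/minikube-linux-arm64 -p functional-767012 config unset cpus
  out/minikube-linux-arm64 -p functional-767012 config get cpus    # exit status 14 again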

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-767012 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-767012 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (192.66784ms)

-- stdout --
	* [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1212 00:35:48.230424   71309 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:35:48.230584   71309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:48.230610   71309 out.go:374] Setting ErrFile to fd 2...
	I1212 00:35:48.230628   71309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:48.230947   71309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:35:48.231387   71309 out.go:368] Setting JSON to false
	I1212 00:35:48.232237   71309 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4695,"bootTime":1765495054,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 00:35:48.232306   71309 start.go:143] virtualization:  
	I1212 00:35:48.237658   71309 out.go:179] * [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 00:35:48.240621   71309 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:35:48.240773   71309 notify.go:221] Checking for updates...
	I1212 00:35:48.247054   71309 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:35:48.250094   71309 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:35:48.252996   71309 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 00:35:48.255944   71309 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:35:48.258953   71309 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:35:48.262461   71309 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:48.263060   71309 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:35:48.288753   71309 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:35:48.288885   71309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:48.352537   71309 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:35:48.341824978 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:35:48.352654   71309 docker.go:319] overlay module found
	I1212 00:35:48.355732   71309 out.go:179] * Using the docker driver based on existing profile
	I1212 00:35:48.359426   71309 start.go:309] selected driver: docker
	I1212 00:35:48.359448   71309 start.go:927] validating driver "docker" against &{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:48.359546   71309 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:35:48.362927   71309 out.go:203] 
	W1212 00:35:48.365660   71309 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 00:35:48.368551   71309 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-767012 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.46s)
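
The exit status 23 run is the point of the test: --dry-run still performs driver and resource validation, so the undersized request is rejected before any cluster state changes. A minimal sketch, assuming the functional-767012 profile already exists:

  # rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23)
  out/minikube-linux-arm64 start -p functional-767012 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
  # passes validation and exits without creating anything
  out/minikube-linux-arm64 start -p functional-767012 --dry-run --driver=docker --container-runtime=containerd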

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-767012 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-767012 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (204.276335ms)

-- stdout --
	* [functional-767012] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I1212 00:35:48.029499   71260 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:35:48.029731   71260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:48.029764   71260 out.go:374] Setting ErrFile to fd 2...
	I1212 00:35:48.029785   71260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:35:48.030219   71260 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:35:48.030672   71260 out.go:368] Setting JSON to false
	I1212 00:35:48.031613   71260 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4694,"bootTime":1765495054,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 00:35:48.031730   71260 start.go:143] virtualization:  
	I1212 00:35:48.035355   71260 out.go:179] * [functional-767012] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1212 00:35:48.039235   71260 notify.go:221] Checking for updates...
	I1212 00:35:48.042386   71260 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 00:35:48.045460   71260 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:35:48.048465   71260 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 00:35:48.051481   71260 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 00:35:48.054442   71260 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 00:35:48.057464   71260 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:35:48.061378   71260 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 00:35:48.062020   71260 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 00:35:48.086938   71260 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 00:35:48.087133   71260 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:35:48.158136   71260 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 00:35:48.147281305 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:35:48.158265   71260 docker.go:319] overlay module found
	I1212 00:35:48.163086   71260 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1212 00:35:48.165916   71260 start.go:309] selected driver: docker
	I1212 00:35:48.165940   71260 start.go:927] validating driver "docker" against &{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:35:48.166045   71260 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:35:48.169704   71260 out.go:203] 
	W1212 00:35:48.172615   71260 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 00:35:48.175455   71260 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.72s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh -n functional-767012 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 cp functional-767012:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1022785893/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh -n functional-767012 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh -n functional-767012 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.18s)
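
The cp checks cover both copy directions plus a destination directory that has to be created; the same round trip by hand, with /tmp/out.txt as a hypothetical host-side destination:

  out/minikube-linux-arm64 -p functional-767012 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-linux-arm64 -p functional-767012 cp functional-767012:/home/docker/cp-test.txt /tmp/out.txt
  out/minikube-linux-arm64 -p functional-767012 ssh -n functional-767012 "sudo cat /home/docker/cp-test.txt"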

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4290/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "sudo cat /etc/test/nested/copy/4290/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4290.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "sudo cat /etc/ssl/certs/4290.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4290.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "sudo cat /usr/share/ca-certificates/4290.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/42902.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "sudo cat /etc/ssl/certs/42902.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/42902.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "sudo cat /usr/share/ca-certificates/42902.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.84s)
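
The six paths checked above are the synced cert under both trust directories plus its hash-named copy (51391683.0 and 3ec20f2e.0 appear to be openssl subject-hash filenames, though that is an inference from the names); a quick manual spot check:

  out/minikube-linux-arm64 -p functional-767012 ssh "sudo cat /etc/ssl/certs/4290.pem"
  out/minikube-linux-arm64 -p functional-767012 ssh "sudo cat /usr/share/ca-certificates/4290.pem"
  out/minikube-linux-arm64 -p functional-767012 ssh "sudo cat /etc/ssl/certs/51391683.0"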

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 ssh "sudo systemctl is-active docker": exit status 1 (268.90821ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 ssh "sudo systemctl is-active crio": exit status 1 (279.913106ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.55s)
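
The exit status 3 here comes from systemctl itself: is-active prints the unit state and exits non-zero for anything but active, which is what the test keys off. A sketch (containerd is the active runtime on this node, so that check should exit 0):

  out/minikube-linux-arm64 -p functional-767012 ssh "sudo systemctl is-active containerd"   # active, exit 0
  out/minikube-linux-arm64 -p functional-767012 ssh "sudo systemctl is-active docker"       # inactive, exit 3
  out/minikube-linux-arm64 -p functional-767012 ssh "sudo systemctl is-active crio"         # inactive, exit 3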

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-767012 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-767012 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "356.293427ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "52.143255ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.41s)
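
The two timings differ because -l (--light) skips probing each cluster's status and reads only the profile configs; a sketch of the comparison being timed:

  out/minikube-linux-arm64 profile list       # full listing (~356ms here)
  out/minikube-linux-arm64 profile list -l    # light listing (~52ms here)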

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "317.979422ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "48.287497ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3942718542/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (381.601852ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1212 00:35:41.372855    4290 retry.go:31] will retry after 485.571389ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3942718542/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 ssh "sudo umount -f /mount-9p": exit status 1 (274.286883ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-767012 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3942718542/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.95s)
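
The first findmnt probe fails simply because the 9p mount is still being established, hence the ~500ms retry in the log. A manual version of the same check, with /tmp/mnt as a hypothetical host directory:

  out/minikube-linux-arm64 mount -p functional-767012 /tmp/mnt:/mount-9p --port 46464 &
  # retry until the 9p mount appears inside the node
  out/minikube-linux-arm64 -p functional-767012 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-767012 ssh -- ls -la /mount-9p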

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo322877521/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo322877521/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo322877521/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 ssh "findmnt -T" /mount1: exit status 1 (594.764726ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1212 00:35:43.540645    4290 retry.go:31] will retry after 439.747719ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-767012 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo322877521/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo322877521/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-767012 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo322877521/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.95s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.6s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.60s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-767012 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-767012
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-767012
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-767012 image ls --format short --alsologtostderr:
I1212 00:36:01.059931   73481 out.go:360] Setting OutFile to fd 1 ...
I1212 00:36:01.060045   73481 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:36:01.060056   73481 out.go:374] Setting ErrFile to fd 2...
I1212 00:36:01.060065   73481 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:36:01.060303   73481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
I1212 00:36:01.060906   73481 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 00:36:01.061029   73481 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 00:36:01.061586   73481 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
I1212 00:36:01.080100   73481 ssh_runner.go:195] Run: systemctl --version
I1212 00:36:01.080165   73481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
I1212 00:36:01.097135   73481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
I1212 00:36:01.201908   73481 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)
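
image ls renders the same crictl image data (the stderr above shows the underlying "sudo crictl images --output json" call) in several formats; the ones exercised in this group are:

  out/minikube-linux-arm64 -p functional-767012 image ls --format short
  out/minikube-linux-arm64 -p functional-767012 image ls --format table
  out/minikube-linux-arm64 -p functional-767012 image ls --format json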

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-767012 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test │ functional-767012  │ sha256:2af6a2 │ 992B   │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0     │ sha256:ccd634 │ 24.7MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ localhost/my-image                          │ functional-767012  │ sha256:291797 │ 831kB  │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0     │ sha256:404c2e │ 22.4MB │
│ docker.io/kicbase/echo-server               │ functional-767012  │ sha256:ce2d2c │ 2.17MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0     │ sha256:68b5f7 │ 20.7MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0     │ sha256:163787 │ 15.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-767012 image ls --format table --alsologtostderr:
I1212 00:36:05.164190   73881 out.go:360] Setting OutFile to fd 1 ...
I1212 00:36:05.164312   73881 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:36:05.164317   73881 out.go:374] Setting ErrFile to fd 2...
I1212 00:36:05.164322   73881 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:36:05.164732   73881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
I1212 00:36:05.166050   73881 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 00:36:05.166260   73881 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 00:36:05.166850   73881 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
I1212 00:36:05.184114   73881 ssh_runner.go:195] Run: systemctl --version
I1212 00:36:05.184165   73881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
I1212 00:36:05.201330   73881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
I1212 00:36:05.305722   73881 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-767012 image ls --format json --alsologtostderr:
[{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:29179774bb553e6bbead5da7f0ea2f255bf02b6fc404c1c7cebcea17a3ffcc75","repoDigests":[],"repoTags":["localhost/my-image:functional-767012"],"size":"830617"},{"id":"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"22429671"},{"id":"sha256
:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-767012"],"size":"2173567"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"24678359"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:
8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:2af6a2f60c44ae40a2b1bc226758dd0a3c3f1c0d22fd7d74035513945443e825","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-767012"],"size":"992"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"20661043"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@
sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"15391364"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-767012 image ls --format json --alsologtostderr:
I1212 00:36:04.921950   73837 out.go:360] Setting OutFile to fd 1 ...
I1212 00:36:04.922166   73837 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:36:04.922197   73837 out.go:374] Setting ErrFile to fd 2...
I1212 00:36:04.922219   73837 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:36:04.922502   73837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
I1212 00:36:04.923196   73837 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 00:36:04.923368   73837 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 00:36:04.923920   73837 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
I1212 00:36:04.942179   73837 ssh_runner.go:195] Run: systemctl --version
I1212 00:36:04.942230   73837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
I1212 00:36:04.960034   73837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
I1212 00:36:05.066767   73837 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.25s)
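
Note: because the json format is emitted as a single machine-readable document, it pipes cleanly into jq. A minimal sketch (assuming jq is installed on the host) that lists every tag shown above:

    out/minikube-linux-arm64 -p functional-767012 image ls --format json | jq -r '.[].repoTags[]'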

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-767012 image ls --format yaml --alsologtostderr:
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:2af6a2f60c44ae40a2b1bc226758dd0a3c3f1c0d22fd7d74035513945443e825
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-767012
size: "992"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "24678359"
- id: sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "15391364"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "20661043"
- id: sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "22429671"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-767012
size: "2173567"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-767012 image ls --format yaml --alsologtostderr:
I1212 00:36:01.285021   73518 out.go:360] Setting OutFile to fd 1 ...
I1212 00:36:01.286333   73518 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:36:01.286345   73518 out.go:374] Setting ErrFile to fd 2...
I1212 00:36:01.286351   73518 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:36:01.286629   73518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
I1212 00:36:01.287315   73518 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 00:36:01.287432   73518 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 00:36:01.287940   73518 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
I1212 00:36:01.310351   73518 ssh_runner.go:195] Run: systemctl --version
I1212 00:36:01.310405   73518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
I1212 00:36:01.331597   73518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
I1212 00:36:01.433639   73518 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767012 ssh pgrep buildkitd: exit status 1 (268.672406ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 image build -t localhost/my-image:functional-767012 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-767012 image build -t localhost/my-image:functional-767012 testdata/build --alsologtostderr: (2.908784451s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-767012 image build -t localhost/my-image:functional-767012 testdata/build --alsologtostderr:
I1212 00:36:01.785202   73621 out.go:360] Setting OutFile to fd 1 ...
I1212 00:36:01.785478   73621 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:36:01.785495   73621 out.go:374] Setting ErrFile to fd 2...
I1212 00:36:01.785501   73621 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:36:01.786058   73621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
I1212 00:36:01.786744   73621 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 00:36:01.787411   73621 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 00:36:01.787977   73621 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
I1212 00:36:01.805399   73621 ssh_runner.go:195] Run: systemctl --version
I1212 00:36:01.805459   73621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
I1212 00:36:01.822515   73621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
I1212 00:36:01.925613   73621 build_images.go:162] Building image from path: /tmp/build.1208739292.tar
I1212 00:36:01.925706   73621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 00:36:01.933401   73621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1208739292.tar
I1212 00:36:01.936937   73621 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1208739292.tar: stat -c "%s %y" /var/lib/minikube/build/build.1208739292.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1208739292.tar': No such file or directory
I1212 00:36:01.936968   73621 ssh_runner.go:362] scp /tmp/build.1208739292.tar --> /var/lib/minikube/build/build.1208739292.tar (3072 bytes)
I1212 00:36:01.954341   73621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1208739292
I1212 00:36:01.962145   73621 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1208739292 -xf /var/lib/minikube/build/build.1208739292.tar
I1212 00:36:01.970031   73621 containerd.go:394] Building image: /var/lib/minikube/build/build.1208739292
I1212 00:36:01.970126   73621 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1208739292 --local dockerfile=/var/lib/minikube/build/build.1208739292 --output type=image,name=localhost/my-image:functional-767012
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:4df2b5d64445b8c415d89ad19a5a8ee632cdf856c093089b49c6a1536ffd4897 0.0s done
#8 exporting config sha256:29179774bb553e6bbead5da7f0ea2f255bf02b6fc404c1c7cebcea17a3ffcc75 0.0s done
#8 naming to localhost/my-image:functional-767012 done
#8 DONE 0.2s
I1212 00:36:04.621764   73621 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1208739292 --local dockerfile=/var/lib/minikube/build/build.1208739292 --output type=image,name=localhost/my-image:functional-767012: (2.651596508s)
I1212 00:36:04.621839   73621 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1208739292
I1212 00:36:04.629760   73621 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1208739292.tar
I1212 00:36:04.637683   73621 build_images.go:218] Built localhost/my-image:functional-767012 from /tmp/build.1208739292.tar
I1212 00:36:04.637715   73621 build_images.go:134] succeeded building to: functional-767012
I1212 00:36:04.637720   73621 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.40s)
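
Note: the buildkit trace implies a three-step build context. A plausible reconstruction of testdata/build (a sketch only; the log does not show the actual files, and the content.txt payload here is a placeholder):

    mkdir -p testdata/build && cd testdata/build
    printf 'placeholder\n' > content.txt   # real contents unknown from this log
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile

This lines up with steps #5 (FROM busybox), #6 (RUN true), and #7 (ADD content.txt /) above; minikube tars the directory, copies it to the node, and builds it with buildctl as logged.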

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-767012
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 image load --daemon kicbase/echo-server:functional-767012 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 image load --daemon kicbase/echo-server:functional-767012 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-767012
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 image load --daemon kicbase/echo-server:functional-767012 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 image save kicbase/echo-server:functional-767012 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 image rm kicbase/echo-server:functional-767012 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.67s)
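
Note: ImageSaveToFile and ImageLoadFromFile round-trip the image through a tarball on the host. To inspect the intermediate artifact (a sketch; the archive is a standard image tar with a manifest plus layer blobs):

    tar tf /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar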

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-767012
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 image save --daemon kicbase/echo-server:functional-767012 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-767012
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-767012 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-767012
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-767012
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-767012
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.01s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1212 00:38:41.615514    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:38:52.051771    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:38:52.058106    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:38:52.069451    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:38:52.090796    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:38:52.132173    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:38:52.213524    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:38:52.375017    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:38:52.696633    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:38:53.338109    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:38:54.619743    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:38:57.181294    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:39:02.303244    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:39:12.545288    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:39:33.026615    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:40:13.988662    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-543419 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m59.299482308s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (180.20s)
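
Note: --ha brings up three control-plane nodes behind a shared endpoint (a later status trace in this run resolves the cluster server to https://192.168.49.254:8443). To confirm the topology from the host (a sketch, assuming kubectl is installed; the ha-543419 context is used the same way in NodeLabels below):

    kubectl --context ha-543419 get nodes -o wide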

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- rollout status deployment/busybox
E1212 00:40:57.042737    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-543419 kubectl -- rollout status deployment/busybox: (5.5727305s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- exec busybox-7b57f96db7-6mjj5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- exec busybox-7b57f96db7-kzjd9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- exec busybox-7b57f96db7-tsxr7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- exec busybox-7b57f96db7-6mjj5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- exec busybox-7b57f96db7-kzjd9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- exec busybox-7b57f96db7-tsxr7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- exec busybox-7b57f96db7-6mjj5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- exec busybox-7b57f96db7-kzjd9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- exec busybox-7b57f96db7-tsxr7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.49s)
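
Note: the manifest deploys three busybox replicas, which the test then execs into for the DNS checks above. To see how the pods spread across the HA nodes (a sketch):

    kubectl --context ha-543419 get pods -o wide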

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- exec busybox-7b57f96db7-6mjj5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- exec busybox-7b57f96db7-6mjj5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- exec busybox-7b57f96db7-kzjd9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- exec busybox-7b57f96db7-kzjd9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- exec busybox-7b57f96db7-tsxr7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 kubectl -- exec busybox-7b57f96db7-tsxr7 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.61s)
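
Note: the exec'd pipeline pulls the host-gateway IP out of busybox-style nslookup output: line 5 is the "Address 1: <ip> <name>" record for the queried name, so awk 'NR==5' | cut -d' ' -f3 yields the IP, which the follow-up ping against 192.168.49.1 (the docker network gateway) then exercises. Illustrative output shape (a sketch; not captured in this log):

    Server:    10.96.0.10
    Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

    Name:      host.minikube.internal
    Address 1: 192.168.49.1 host.minikube.internal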

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 node add --alsologtostderr -v 5
E1212 00:41:35.910074    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-543419 node add --alsologtostderr -v 5: (58.635229186s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-543419 status --alsologtostderr -v 5: (1.016958576s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.65s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-543419 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.100367316s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.10s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-543419 status --output json --alsologtostderr -v 5: (1.048372409s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp testdata/cp-test.txt ha-543419:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp ha-543419:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2845834773/001/cp-test_ha-543419.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp ha-543419:/home/docker/cp-test.txt ha-543419-m02:/home/docker/cp-test_ha-543419_ha-543419-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m02 "sudo cat /home/docker/cp-test_ha-543419_ha-543419-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp ha-543419:/home/docker/cp-test.txt ha-543419-m03:/home/docker/cp-test_ha-543419_ha-543419-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m03 "sudo cat /home/docker/cp-test_ha-543419_ha-543419-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp ha-543419:/home/docker/cp-test.txt ha-543419-m04:/home/docker/cp-test_ha-543419_ha-543419-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m04 "sudo cat /home/docker/cp-test_ha-543419_ha-543419-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp testdata/cp-test.txt ha-543419-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp ha-543419-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2845834773/001/cp-test_ha-543419-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp ha-543419-m02:/home/docker/cp-test.txt ha-543419:/home/docker/cp-test_ha-543419-m02_ha-543419.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419 "sudo cat /home/docker/cp-test_ha-543419-m02_ha-543419.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp ha-543419-m02:/home/docker/cp-test.txt ha-543419-m03:/home/docker/cp-test_ha-543419-m02_ha-543419-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m03 "sudo cat /home/docker/cp-test_ha-543419-m02_ha-543419-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp ha-543419-m02:/home/docker/cp-test.txt ha-543419-m04:/home/docker/cp-test_ha-543419-m02_ha-543419-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m04 "sudo cat /home/docker/cp-test_ha-543419-m02_ha-543419-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp testdata/cp-test.txt ha-543419-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp ha-543419-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2845834773/001/cp-test_ha-543419-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp ha-543419-m03:/home/docker/cp-test.txt ha-543419:/home/docker/cp-test_ha-543419-m03_ha-543419.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419 "sudo cat /home/docker/cp-test_ha-543419-m03_ha-543419.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp ha-543419-m03:/home/docker/cp-test.txt ha-543419-m02:/home/docker/cp-test_ha-543419-m03_ha-543419-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m02 "sudo cat /home/docker/cp-test_ha-543419-m03_ha-543419-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp ha-543419-m03:/home/docker/cp-test.txt ha-543419-m04:/home/docker/cp-test_ha-543419-m03_ha-543419-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m04 "sudo cat /home/docker/cp-test_ha-543419-m03_ha-543419-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp testdata/cp-test.txt ha-543419-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp ha-543419-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2845834773/001/cp-test_ha-543419-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp ha-543419-m04:/home/docker/cp-test.txt ha-543419:/home/docker/cp-test_ha-543419-m04_ha-543419.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419 "sudo cat /home/docker/cp-test_ha-543419-m04_ha-543419.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp ha-543419-m04:/home/docker/cp-test.txt ha-543419-m02:/home/docker/cp-test_ha-543419-m04_ha-543419-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m02 "sudo cat /home/docker/cp-test_ha-543419-m04_ha-543419-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 cp ha-543419-m04:/home/docker/cp-test.txt ha-543419-m03:/home/docker/cp-test_ha-543419-m04_ha-543419-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 ssh -n ha-543419-m03 "sudo cat /home/docker/cp-test_ha-543419-m04_ha-543419-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.14s)
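
Note: CopyFile walks the full node matrix: testdata/cp-test.txt goes host->node, node->host, and node->node for every pair, and each transfer is verified with an ssh'd sudo cat. The two primitives, in general form (exactly the shapes used above):

    out/minikube-linux-arm64 -p ha-543419 cp <source> <node>:<absolute-target-path>
    out/minikube-linux-arm64 -p ha-543419 ssh -n <node> "sudo cat <absolute-target-path>"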

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-543419 node stop m02 --alsologtostderr -v 5: (12.228112463s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-543419 status --alsologtostderr -v 5: exit status 7 (803.471592ms)

-- stdout --
	ha-543419
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543419-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-543419-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543419-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1212 00:42:39.755430   91355 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:42:39.755698   91355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:42:39.755729   91355 out.go:374] Setting ErrFile to fd 2...
	I1212 00:42:39.755766   91355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:42:39.756041   91355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:42:39.756257   91355 out.go:368] Setting JSON to false
	I1212 00:42:39.756318   91355 mustload.go:66] Loading cluster: ha-543419
	I1212 00:42:39.756407   91355 notify.go:221] Checking for updates...
	I1212 00:42:39.756797   91355 config.go:182] Loaded profile config "ha-543419": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 00:42:39.756835   91355 status.go:174] checking status of ha-543419 ...
	I1212 00:42:39.757934   91355 cli_runner.go:164] Run: docker container inspect ha-543419 --format={{.State.Status}}
	I1212 00:42:39.780732   91355 status.go:371] ha-543419 host status = "Running" (err=<nil>)
	I1212 00:42:39.780754   91355 host.go:66] Checking if "ha-543419" exists ...
	I1212 00:42:39.781062   91355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-543419
	I1212 00:42:39.809188   91355 host.go:66] Checking if "ha-543419" exists ...
	I1212 00:42:39.809500   91355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:42:39.809543   91355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-543419
	I1212 00:42:39.835224   91355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/ha-543419/id_rsa Username:docker}
	I1212 00:42:39.941256   91355 ssh_runner.go:195] Run: systemctl --version
	I1212 00:42:39.949587   91355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:42:39.964930   91355 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:42:40.036371   91355 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-12 00:42:40.022777751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:42:40.037135   91355 kubeconfig.go:125] found "ha-543419" server: "https://192.168.49.254:8443"
	I1212 00:42:40.037174   91355 api_server.go:166] Checking apiserver status ...
	I1212 00:42:40.037239   91355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:42:40.053272   91355 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1437/cgroup
	I1212 00:42:40.064103   91355 api_server.go:182] apiserver freezer: "7:freezer:/docker/3b1f38f5fd2eefd505b40bc8ac6dc7a75311be636f4348d22ab8aaffa4f4c746/kubepods/burstable/pod6b0b632e10045be2923a650f293d90e5/d4940521b01b8888cc6dccd71111acc04ac5d6a441f48261dc35734c962cf0d5"
	I1212 00:42:40.064179   91355 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3b1f38f5fd2eefd505b40bc8ac6dc7a75311be636f4348d22ab8aaffa4f4c746/kubepods/burstable/pod6b0b632e10045be2923a650f293d90e5/d4940521b01b8888cc6dccd71111acc04ac5d6a441f48261dc35734c962cf0d5/freezer.state
	I1212 00:42:40.072512   91355 api_server.go:204] freezer state: "THAWED"
	I1212 00:42:40.072542   91355 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1212 00:42:40.081267   91355 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1212 00:42:40.081303   91355 status.go:463] ha-543419 apiserver status = Running (err=<nil>)
	I1212 00:42:40.081313   91355 status.go:176] ha-543419 status: &{Name:ha-543419 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:42:40.081329   91355 status.go:174] checking status of ha-543419-m02 ...
	I1212 00:42:40.081653   91355 cli_runner.go:164] Run: docker container inspect ha-543419-m02 --format={{.State.Status}}
	I1212 00:42:40.100431   91355 status.go:371] ha-543419-m02 host status = "Stopped" (err=<nil>)
	I1212 00:42:40.100454   91355 status.go:384] host is not running, skipping remaining checks
	I1212 00:42:40.100461   91355 status.go:176] ha-543419-m02 status: &{Name:ha-543419-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:42:40.100481   91355 status.go:174] checking status of ha-543419-m03 ...
	I1212 00:42:40.100807   91355 cli_runner.go:164] Run: docker container inspect ha-543419-m03 --format={{.State.Status}}
	I1212 00:42:40.120667   91355 status.go:371] ha-543419-m03 host status = "Running" (err=<nil>)
	I1212 00:42:40.120694   91355 host.go:66] Checking if "ha-543419-m03" exists ...
	I1212 00:42:40.121038   91355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-543419-m03
	I1212 00:42:40.138752   91355 host.go:66] Checking if "ha-543419-m03" exists ...
	I1212 00:42:40.139172   91355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:42:40.139218   91355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-543419-m03
	I1212 00:42:40.156687   91355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/ha-543419-m03/id_rsa Username:docker}
	I1212 00:42:40.260985   91355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:42:40.274914   91355 kubeconfig.go:125] found "ha-543419" server: "https://192.168.49.254:8443"
	I1212 00:42:40.274949   91355 api_server.go:166] Checking apiserver status ...
	I1212 00:42:40.275053   91355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:42:40.287308   91355 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1383/cgroup
	I1212 00:42:40.296204   91355 api_server.go:182] apiserver freezer: "7:freezer:/docker/d29983a5096b42e95b027f9922ee7cb68414049dddae59dbfc3b4dcb3c780d39/kubepods/burstable/podd546c2553e67314c0b7fe0226ef3385f/9ed0b62e84432e7e5560a3765968e084ba7f24a008975eca8299f27d56b3d18e"
	I1212 00:42:40.296279   91355 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d29983a5096b42e95b027f9922ee7cb68414049dddae59dbfc3b4dcb3c780d39/kubepods/burstable/podd546c2553e67314c0b7fe0226ef3385f/9ed0b62e84432e7e5560a3765968e084ba7f24a008975eca8299f27d56b3d18e/freezer.state
	I1212 00:42:40.304853   91355 api_server.go:204] freezer state: "THAWED"
	I1212 00:42:40.304929   91355 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1212 00:42:40.313084   91355 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1212 00:42:40.313119   91355 status.go:463] ha-543419-m03 apiserver status = Running (err=<nil>)
	I1212 00:42:40.313130   91355 status.go:176] ha-543419-m03 status: &{Name:ha-543419-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:42:40.313147   91355 status.go:174] checking status of ha-543419-m04 ...
	I1212 00:42:40.313469   91355 cli_runner.go:164] Run: docker container inspect ha-543419-m04 --format={{.State.Status}}
	I1212 00:42:40.334568   91355 status.go:371] ha-543419-m04 host status = "Running" (err=<nil>)
	I1212 00:42:40.334590   91355 host.go:66] Checking if "ha-543419-m04" exists ...
	I1212 00:42:40.334908   91355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-543419-m04
	I1212 00:42:40.353168   91355 host.go:66] Checking if "ha-543419-m04" exists ...
	I1212 00:42:40.353490   91355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:42:40.353540   91355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-543419-m04
	I1212 00:42:40.371590   91355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/ha-543419-m04/id_rsa Username:docker}
	I1212 00:42:40.480632   91355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:42:40.494300   91355 status.go:176] ha-543419-m04 status: &{Name:ha-543419-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.03s)
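The stderr trace above is a compact map of how `minikube status` decides node health: inspect the Docker container state, probe kubelet over SSH, find the kube-apiserver process and its cgroup freezer state, then hit the apiserver's /healthz. A rough manual equivalent, using the profile and endpoint from this run (the cgroup path is container-specific and elided here):

docker container inspect ha-543419 --format '{{.State.Status}}'              # host state
minikube -p ha-543419 ssh "sudo systemctl is-active --quiet service kubelet" # kubelet (exit 0 = active)
minikube -p ha-543419 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"       # apiserver pid
kubectl --context ha-543419 get --raw /healthz                               # expect: ok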

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (14.01s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-543419 node start m02 --alsologtostderr -v 5: (12.415258155s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-543419 status --alsologtostderr -v 5: (1.462860223s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.01s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.19s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.192732661s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.19s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (97.84s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-543419 stop --alsologtostderr -v 5: (37.416164432s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 start --wait true --alsologtostderr -v 5
E1212 00:43:41.615256    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:43:52.051778    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:44:00.114176    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:44:19.751344    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-543419 start --wait true --alsologtostderr -v 5: (1m0.222647611s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (97.84s)
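Reduced to plain commands, the stop-then-restart invariant this test checks is the sketch below (`minikube` standing in for the out/minikube-linux-arm64 binary used above):

minikube -p ha-543419 node list            # record the node set
minikube -p ha-543419 stop
minikube -p ha-543419 start --wait true
minikube -p ha-543419 node list            # must report the same nodes as before the stop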

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.38s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-543419 node delete m03 --alsologtostderr -v 5: (10.244154518s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.38s)
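The go-template at ha_test.go:521 prints one Ready condition per remaining node. A jsonpath equivalent, offered only as an illustration (the test itself runs the template form):

kubectl --context ha-543419 get nodes \
  -o jsonpath='{range .items[*]}{.metadata.name}={.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'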

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.94s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-543419 stop --alsologtostderr -v 5: (36.83362853s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-543419 status --alsologtostderr -v 5: exit status 7 (110.944609ms)

-- stdout --
	ha-543419
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-543419-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-543419-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1212 00:45:23.404008  106292 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:45:23.404134  106292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:45:23.404144  106292 out.go:374] Setting ErrFile to fd 2...
	I1212 00:45:23.404149  106292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:45:23.404410  106292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:45:23.404586  106292 out.go:368] Setting JSON to false
	I1212 00:45:23.404617  106292 mustload.go:66] Loading cluster: ha-543419
	I1212 00:45:23.404671  106292 notify.go:221] Checking for updates...
	I1212 00:45:23.405039  106292 config.go:182] Loaded profile config "ha-543419": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 00:45:23.405065  106292 status.go:174] checking status of ha-543419 ...
	I1212 00:45:23.405636  106292 cli_runner.go:164] Run: docker container inspect ha-543419 --format={{.State.Status}}
	I1212 00:45:23.424290  106292 status.go:371] ha-543419 host status = "Stopped" (err=<nil>)
	I1212 00:45:23.424316  106292 status.go:384] host is not running, skipping remaining checks
	I1212 00:45:23.424323  106292 status.go:176] ha-543419 status: &{Name:ha-543419 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:45:23.424353  106292 status.go:174] checking status of ha-543419-m02 ...
	I1212 00:45:23.424662  106292 cli_runner.go:164] Run: docker container inspect ha-543419-m02 --format={{.State.Status}}
	I1212 00:45:23.444713  106292 status.go:371] ha-543419-m02 host status = "Stopped" (err=<nil>)
	I1212 00:45:23.444739  106292 status.go:384] host is not running, skipping remaining checks
	I1212 00:45:23.444752  106292 status.go:176] ha-543419-m02 status: &{Name:ha-543419-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:45:23.444770  106292 status.go:174] checking status of ha-543419-m04 ...
	I1212 00:45:23.445040  106292 cli_runner.go:164] Run: docker container inspect ha-543419-m04 --format={{.State.Status}}
	I1212 00:45:23.467931  106292 status.go:371] ha-543419-m04 host status = "Stopped" (err=<nil>)
	I1212 00:45:23.467955  106292 status.go:384] host is not running, skipping remaining checks
	I1212 00:45:23.467962  106292 status.go:176] ha-543419-m04 status: &{Name:ha-543419-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.94s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (59.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1212 00:45:57.042394    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-543419 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (58.263246474s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (59.26s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (48.91s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-543419 node add --control-plane --alsologtostderr -v 5: (47.865625673s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-543419 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-543419 status --alsologtostderr -v 5: (1.045397105s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (48.91s)
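For reference, the grow-back step boils down to two commands (profile name from this run):

minikube -p ha-543419 node add --control-plane   # join a fresh control-plane node
minikube -p ha-543419 status                     # the new node should show Running/Configured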

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.030841115s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

                                                
                                    
TestJSONOutput/start/Command (77.2s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-892550 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-892550 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m17.192990544s)
--- PASS: TestJSONOutput/start/Command (77.20s)
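--output=json emits one CloudEvents-style object per line (the TestErrorJSONOutput stdout further down shows the exact shape). A sketch of consuming the step events, assuming jq is on PATH:

minikube start -p json-output-892550 --output=json --user=testUser --memory=3072 --wait=true \
  --driver=docker --container-runtime=containerd \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | "\(.data.currentstep)/\(.data.totalsteps) \(.data.message)"'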

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-892550 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-892550 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.99s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-892550 --output=json --user=testUser
E1212 00:48:41.614908    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-892550 --output=json --user=testUser: (5.986938317s)
--- PASS: TestJSONOutput/stop/Command (5.99s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-172980 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-172980 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (106.253883ms)

-- stdout --
	{"specversion":"1.0","id":"cff770a9-629d-4c82-bb9e-d1d75085b763","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-172980] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2548d6f5-e898-412e-90da-ef32e540f200","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22101"}}
	{"specversion":"1.0","id":"6c67cff5-245f-4874-a413-9b9e6870a753","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fc6ea9d8-8f44-4c08-a8fc-770450d97c8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig"}}
	{"specversion":"1.0","id":"11eeb277-e0f1-4f7a-88e4-439ee4597d62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube"}}
	{"specversion":"1.0","id":"b3247d48-29fe-4d68-abf6-ffe5e4ef4b97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"774e3ab4-7c32-419b-a14e-7d1ce042fe83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bfd1ea71-81e5-46d8-afeb-c6d521e335d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-172980" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-172980
E1212 00:48:52.051722    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestErrorJSONOutput (0.26s)
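The failure path emits a single io.k8s.sigs.minikube.error event whose data carries the exit code and error name; a sketch of pulling it out of the stream (jq assumed):

minikube start -p json-output-error-172980 --memory=3072 --output=json --wait=true --driver=fail \
  | jq 'select(.type == "io.k8s.sigs.minikube.error") | .data'
# expect exitcode "56" and name "DRV_UNSUPPORTED_OS", as in the stdout above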

                                                
                                    
TestKicCustomNetwork/create_custom_network (41.75s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-573365 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-573365 --network=: (39.495045428s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-573365" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-573365
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-573365: (2.22710806s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.75s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (36.99s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-887044 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-887044 --network=bridge: (34.83621442s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-887044" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-887044
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-887044: (2.124294779s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.99s)

                                                
                                    
TestKicExistingNetwork (28.98s)

=== RUN   TestKicExistingNetwork
I1212 00:50:10.921117    4290 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1212 00:50:10.937024    4290 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1212 00:50:10.937106    4290 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1212 00:50:10.937123    4290 cli_runner.go:164] Run: docker network inspect existing-network
W1212 00:50:10.952937    4290 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1212 00:50:10.952972    4290 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1212 00:50:10.952986    4290 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1212 00:50:10.953088    4290 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1212 00:50:10.969703    4290 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4cd687b06342 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a2:e8:c8:87:d3:0a} reservation:<nil>}
I1212 00:50:10.969966    4290 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024e2e60}
I1212 00:50:10.969986    4290 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1212 00:50:10.970035    4290 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1212 00:50:11.036985    4290 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-010883 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-010883 --network=existing-network: (26.729069256s)
helpers_test.go:176: Cleaning up "existing-network-010883" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-010883
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-010883: (2.107187056s)
I1212 00:50:39.889176    4290 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (28.98s)
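The network_create lines above record the exact docker command minikube issues, so pre-creating a compatible network by hand looks like this (values taken from this run):

docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
  existing-network
minikube start -p existing-network-010883 --network=existing-network   # reuses the network instead of creating one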

                                                
                                    
TestKicCustomSubnet (37.29s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-594273 --subnet=192.168.60.0/24
E1212 00:50:57.044712    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-594273 --subnet=192.168.60.0/24: (35.077488046s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-594273 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-594273" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-594273
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-594273: (2.193118775s)
--- PASS: TestKicCustomSubnet (37.29s)
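The verification is a single template query against the created network (values from this run):

minikube start -p custom-subnet-594273 --subnet=192.168.60.0/24
docker network inspect custom-subnet-594273 --format '{{(index .IPAM.Config 0).Subnet}}'   # expect: 192.168.60.0/24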

                                                
                                    
TestKicStaticIP (35.37s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-206285 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-206285 --static-ip=192.168.200.200: (32.919120998s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-206285 ip
helpers_test.go:176: Cleaning up "static-ip-206285" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-206285
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-206285: (2.264092271s)
--- PASS: TestKicStaticIP (35.37s)
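Same pattern for the static-IP case: start with the flag, then ask minikube for the address (values from this run):

minikube start -p static-ip-206285 --static-ip=192.168.200.200
minikube -p static-ip-206285 ip   # expect: 192.168.200.200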

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (73.31s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-214620 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-214620 --driver=docker  --container-runtime=containerd: (32.910724695s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-217369 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-217369 --driver=docker  --container-runtime=containerd: (34.118755803s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-214620
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-217369
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-217369" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-217369
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-217369: (2.127984488s)
helpers_test.go:176: Cleaning up "first-214620" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-214620
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-214620: (2.348101247s)
--- PASS: TestMinikubeProfile (73.31s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.83s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-070785 --memory=3072 --mount-string /tmp/TestMountStartserial3991558630/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-070785 --memory=3072 --mount-string /tmp/TestMountStartserial3991558630/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.834314929s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.83s)
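Spelled out, the 9p mount options exercised here are the following (the host path below is a stand-in; any scratch directory works):

minikube start -p mount-start-1-070785 --memory=3072 \
  --mount-string /tmp/host-dir:/minikube-host \
  --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464 \
  --no-kubernetes --driver=docker --container-runtime=containerd
minikube -p mount-start-1-070785 ssh -- ls /minikube-host   # host files should be visible in the guest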

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-070785 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.43s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-072639 --memory=3072 --mount-string /tmp/TestMountStartserial3991558630/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-072639 --memory=3072 --mount-string /tmp/TestMountStartserial3991558630/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.432799132s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.43s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-072639 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-070785 --alsologtostderr -v=5
E1212 00:53:24.691500    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-070785 --alsologtostderr -v=5: (1.734064509s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-072639 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-072639
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-072639: (1.280355048s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.75s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-072639
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-072639: (6.750847593s)
--- PASS: TestMountStart/serial/RestartStopped (7.75s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-072639 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (136.95s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-134406 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1212 00:53:41.616623    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:53:52.051917    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:55:15.112790    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-134406 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m16.416995153s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (136.95s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.14s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-134406 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-134406 -- rollout status deployment/busybox
E1212 00:55:57.043560    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-134406 -- rollout status deployment/busybox: (3.27789725s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-134406 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-134406 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-134406 -- exec busybox-7b57f96db7-7xtmv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-134406 -- exec busybox-7b57f96db7-h9zf8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-134406 -- exec busybox-7b57f96db7-7xtmv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-134406 -- exec busybox-7b57f96db7-h9zf8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-134406 -- exec busybox-7b57f96db7-7xtmv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-134406 -- exec busybox-7b57f96db7-h9zf8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.14s)
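The deploy check resolves three names from each pod, covering external DNS, the in-cluster short name, and the fully qualified service name. By hand (pod names vary per run):

kubectl --context multinode-134406 exec busybox-7b57f96db7-7xtmv -- nslookup kubernetes.io
kubectl --context multinode-134406 exec busybox-7b57f96db7-7xtmv -- nslookup kubernetes.default
kubectl --context multinode-134406 exec busybox-7b57f96db7-7xtmv -- nslookup kubernetes.default.svc.cluster.local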

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.97s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-134406 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-134406 -- exec busybox-7b57f96db7-7xtmv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-134406 -- exec busybox-7b57f96db7-7xtmv -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-134406 -- exec busybox-7b57f96db7-h9zf8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-134406 -- exec busybox-7b57f96db7-h9zf8 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)
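The awk 'NR==5' | cut -d' ' -f3 pipeline picks the resolved address out of busybox nslookup output (the answer line for host.minikube.internal), which is then pinged to prove pod-to-host reachability. A sketch, with HOST_IP as a scratch variable:

HOST_IP=$(kubectl --context multinode-134406 exec busybox-7b57f96db7-7xtmv -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
kubectl --context multinode-134406 exec busybox-7b57f96db7-7xtmv -- sh -c "ping -c 1 $HOST_IP"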

                                                
                                    
TestMultiNode/serial/AddNode (29.14s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-134406 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-134406 -v=5 --alsologtostderr: (28.408490072s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.14s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-134406 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

TestMultiNode/serial/CopyFile (10.57s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 cp testdata/cp-test.txt multinode-134406:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 ssh -n multinode-134406 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 cp multinode-134406:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2626566793/001/cp-test_multinode-134406.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 ssh -n multinode-134406 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 cp multinode-134406:/home/docker/cp-test.txt multinode-134406-m02:/home/docker/cp-test_multinode-134406_multinode-134406-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 ssh -n multinode-134406 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 ssh -n multinode-134406-m02 "sudo cat /home/docker/cp-test_multinode-134406_multinode-134406-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 cp multinode-134406:/home/docker/cp-test.txt multinode-134406-m03:/home/docker/cp-test_multinode-134406_multinode-134406-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 ssh -n multinode-134406 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 ssh -n multinode-134406-m03 "sudo cat /home/docker/cp-test_multinode-134406_multinode-134406-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 cp testdata/cp-test.txt multinode-134406-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 ssh -n multinode-134406-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 cp multinode-134406-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2626566793/001/cp-test_multinode-134406-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 ssh -n multinode-134406-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 cp multinode-134406-m02:/home/docker/cp-test.txt multinode-134406:/home/docker/cp-test_multinode-134406-m02_multinode-134406.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 ssh -n multinode-134406-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 ssh -n multinode-134406 "sudo cat /home/docker/cp-test_multinode-134406-m02_multinode-134406.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 cp multinode-134406-m02:/home/docker/cp-test.txt multinode-134406-m03:/home/docker/cp-test_multinode-134406-m02_multinode-134406-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 ssh -n multinode-134406-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 ssh -n multinode-134406-m03 "sudo cat /home/docker/cp-test_multinode-134406-m02_multinode-134406-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 cp testdata/cp-test.txt multinode-134406-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 ssh -n multinode-134406-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 cp multinode-134406-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2626566793/001/cp-test_multinode-134406-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 ssh -n multinode-134406-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 cp multinode-134406-m03:/home/docker/cp-test.txt multinode-134406:/home/docker/cp-test_multinode-134406-m03_multinode-134406.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 ssh -n multinode-134406-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 ssh -n multinode-134406 "sudo cat /home/docker/cp-test_multinode-134406-m03_multinode-134406.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 cp multinode-134406-m03:/home/docker/cp-test.txt multinode-134406-m02:/home/docker/cp-test_multinode-134406-m03_multinode-134406-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 ssh -n multinode-134406-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 ssh -n multinode-134406-m02 "sudo cat /home/docker/cp-test_multinode-134406-m03_multinode-134406-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.57s)
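The copy matrix above repeats one pattern for every node pair: cp a file in, then verify it over ssh with sudo cat. A minimal sketch of one round trip (minikube stands in for the out/minikube-linux-arm64 binary under test; profile and paths are taken from this run):

    # Host -> node, verified over ssh.
    minikube -p multinode-134406 cp testdata/cp-test.txt multinode-134406:/home/docker/cp-test.txt
    minikube -p multinode-134406 ssh -n multinode-134406 "sudo cat /home/docker/cp-test.txt"
    # Node -> node goes through the same cp subcommand.
    minikube -p multinode-134406 cp multinode-134406:/home/docker/cp-test.txt \
      multinode-134406-m02:/home/docker/cp-test_multinode-134406_multinode-134406-m02.txt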

TestMultiNode/serial/StopNode (2.49s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-134406 node stop m03: (1.322546235s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-134406 status: exit status 7 (624.289784ms)

-- stdout --
	multinode-134406
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-134406-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-134406-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-134406 status --alsologtostderr: exit status 7 (543.791807ms)

-- stdout --
	multinode-134406
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-134406-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-134406-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1212 00:56:42.509023  159678 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:56:42.509195  159678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:56:42.509226  159678 out.go:374] Setting ErrFile to fd 2...
	I1212 00:56:42.509246  159678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:56:42.509621  159678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:56:42.509895  159678 out.go:368] Setting JSON to false
	I1212 00:56:42.509953  159678 mustload.go:66] Loading cluster: multinode-134406
	I1212 00:56:42.510750  159678 config.go:182] Loaded profile config "multinode-134406": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 00:56:42.510817  159678 status.go:174] checking status of multinode-134406 ...
	I1212 00:56:42.511756  159678 notify.go:221] Checking for updates...
	I1212 00:56:42.512105  159678 cli_runner.go:164] Run: docker container inspect multinode-134406 --format={{.State.Status}}
	I1212 00:56:42.533207  159678 status.go:371] multinode-134406 host status = "Running" (err=<nil>)
	I1212 00:56:42.533230  159678 host.go:66] Checking if "multinode-134406" exists ...
	I1212 00:56:42.533546  159678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-134406
	I1212 00:56:42.554543  159678 host.go:66] Checking if "multinode-134406" exists ...
	I1212 00:56:42.554850  159678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:56:42.554903  159678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-134406
	I1212 00:56:42.576697  159678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/multinode-134406/id_rsa Username:docker}
	I1212 00:56:42.681384  159678 ssh_runner.go:195] Run: systemctl --version
	I1212 00:56:42.687995  159678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:56:42.701253  159678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 00:56:42.765669  159678 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-12 00:56:42.755871636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 00:56:42.766231  159678 kubeconfig.go:125] found "multinode-134406" server: "https://192.168.67.2:8443"
	I1212 00:56:42.766271  159678 api_server.go:166] Checking apiserver status ...
	I1212 00:56:42.766329  159678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:56:42.779065  159678 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1353/cgroup
	I1212 00:56:42.787528  159678 api_server.go:182] apiserver freezer: "7:freezer:/docker/619cf0c77e7802b21c3319e9a468e1e0932035dab08999b0541d754787c1a199/kubepods/burstable/pode47d035a8f4fa712c596eb29ba377ac4/df3249726d8a7bba0ee7c1b9b8962bd3bfe63fb9756627af8f97aa4f37a4ec1f"
	I1212 00:56:42.787613  159678 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/619cf0c77e7802b21c3319e9a468e1e0932035dab08999b0541d754787c1a199/kubepods/burstable/pode47d035a8f4fa712c596eb29ba377ac4/df3249726d8a7bba0ee7c1b9b8962bd3bfe63fb9756627af8f97aa4f37a4ec1f/freezer.state
	I1212 00:56:42.795665  159678 api_server.go:204] freezer state: "THAWED"
	I1212 00:56:42.795698  159678 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1212 00:56:42.803876  159678 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1212 00:56:42.803907  159678 status.go:463] multinode-134406 apiserver status = Running (err=<nil>)
	I1212 00:56:42.803918  159678 status.go:176] multinode-134406 status: &{Name:multinode-134406 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:56:42.803936  159678 status.go:174] checking status of multinode-134406-m02 ...
	I1212 00:56:42.804260  159678 cli_runner.go:164] Run: docker container inspect multinode-134406-m02 --format={{.State.Status}}
	I1212 00:56:42.823042  159678 status.go:371] multinode-134406-m02 host status = "Running" (err=<nil>)
	I1212 00:56:42.823069  159678 host.go:66] Checking if "multinode-134406-m02" exists ...
	I1212 00:56:42.823384  159678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-134406-m02
	I1212 00:56:42.840922  159678 host.go:66] Checking if "multinode-134406-m02" exists ...
	I1212 00:56:42.841251  159678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:56:42.841295  159678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-134406-m02
	I1212 00:56:42.860898  159678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/multinode-134406-m02/id_rsa Username:docker}
	I1212 00:56:42.968340  159678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:56:42.981212  159678 status.go:176] multinode-134406-m02 status: &{Name:multinode-134406-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:56:42.981247  159678 status.go:174] checking status of multinode-134406-m03 ...
	I1212 00:56:42.981548  159678 cli_runner.go:164] Run: docker container inspect multinode-134406-m03 --format={{.State.Status}}
	I1212 00:56:42.998804  159678 status.go:371] multinode-134406-m03 host status = "Stopped" (err=<nil>)
	I1212 00:56:42.998827  159678 status.go:384] host is not running, skipping remaining checks
	I1212 00:56:42.998834  159678 status.go:176] multinode-134406-m03 status: &{Name:multinode-134406-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.49s)
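Note that status intentionally exits with code 7 while any node is stopped, which is what the test asserts on. A sketch of the same check (minikube standing in for the binary under test):

    minikube -p multinode-134406 node stop m03
    minikube -p multinode-134406 status
    echo $?   # 7: at least one host/kubelet is reported as Stopped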

TestMultiNode/serial/StartAfterStop (7.73s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-134406 node start m03 -v=5 --alsologtostderr: (6.920696062s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.73s)

TestMultiNode/serial/RestartKeepsNodes (72.73s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-134406
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-134406
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-134406: (25.165303223s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-134406 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-134406 --wait=true -v=5 --alsologtostderr: (47.444298508s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-134406
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.73s)

TestMultiNode/serial/DeleteNode (5.62s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-134406 node delete m03: (4.938946454s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.62s)

TestMultiNode/serial/StopMultiNode (24.08s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-134406 stop: (23.882425082s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-134406 status: exit status 7 (103.975618ms)

-- stdout --
	multinode-134406
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-134406-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-134406 status --alsologtostderr: exit status 7 (98.183103ms)

-- stdout --
	multinode-134406
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-134406-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1212 00:58:33.131260  168471 out.go:360] Setting OutFile to fd 1 ...
	I1212 00:58:33.131881  168471 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:58:33.132092  168471 out.go:374] Setting ErrFile to fd 2...
	I1212 00:58:33.132113  168471 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 00:58:33.132460  168471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 00:58:33.132711  168471 out.go:368] Setting JSON to false
	I1212 00:58:33.132777  168471 mustload.go:66] Loading cluster: multinode-134406
	I1212 00:58:33.132867  168471 notify.go:221] Checking for updates...
	I1212 00:58:33.133267  168471 config.go:182] Loaded profile config "multinode-134406": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 00:58:33.133317  168471 status.go:174] checking status of multinode-134406 ...
	I1212 00:58:33.133838  168471 cli_runner.go:164] Run: docker container inspect multinode-134406 --format={{.State.Status}}
	I1212 00:58:33.152506  168471 status.go:371] multinode-134406 host status = "Stopped" (err=<nil>)
	I1212 00:58:33.152526  168471 status.go:384] host is not running, skipping remaining checks
	I1212 00:58:33.152533  168471 status.go:176] multinode-134406 status: &{Name:multinode-134406 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:58:33.152556  168471 status.go:174] checking status of multinode-134406-m02 ...
	I1212 00:58:33.152868  168471 cli_runner.go:164] Run: docker container inspect multinode-134406-m02 --format={{.State.Status}}
	I1212 00:58:33.179496  168471 status.go:371] multinode-134406-m02 host status = "Stopped" (err=<nil>)
	I1212 00:58:33.179523  168471 status.go:384] host is not running, skipping remaining checks
	I1212 00:58:33.179531  168471 status.go:176] multinode-134406-m02 status: &{Name:multinode-134406-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.08s)

TestMultiNode/serial/RestartMultiNode (50.33s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-134406 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1212 00:58:41.615348    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:58:52.051522    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-134406 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.629451306s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-134406 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.33s)

TestMultiNode/serial/ValidateNameConflict (34.25s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-134406
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-134406-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-134406-m02 --driver=docker  --container-runtime=containerd: exit status 14 (91.142168ms)

-- stdout --
	* [multinode-134406-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-134406-m02' is duplicated with machine name 'multinode-134406-m02' in profile 'multinode-134406'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-134406-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-134406-m03 --driver=docker  --container-runtime=containerd: (31.664710837s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-134406
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-134406: exit status 80 (344.879979ms)

-- stdout --
	* Adding node m03 to cluster multinode-134406 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-134406-m03 already exists in multinode-134406-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-134406-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-134406-m03: (2.087885263s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.25s)
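Both non-zero exits above are name-collision guards: starting a profile whose name matches an existing machine name fails with exit 14 (MK_USAGE), and node add refuses a node name already claimed elsewhere with exit 80 (GUEST_NODE_ADD). A sketch of the two checks (minikube standing in for the binary under test):

    minikube start -p multinode-134406-m02 --driver=docker --container-runtime=containerd   # exit 14, duplicate profile name
    minikube node add -p multinode-134406                                                   # exit 80, node name already exists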

TestPreload (138.71s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-748391 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
E1212 01:00:40.116239    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:00:57.043060    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-748391 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (1m17.791197541s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-748391 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-748391 image pull gcr.io/k8s-minikube/busybox: (2.351947573s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-748391
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-748391: (5.915791216s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-748391 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-748391 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (50.055741027s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-748391 image list
helpers_test.go:176: Cleaning up "test-preload-748391" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-748391
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-748391: (2.363150178s)
--- PASS: TestPreload (138.71s)
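The preload test's shape, condensed from the commands above: create a cluster with preloaded images disabled, pull an extra image, stop, restart with preloads enabled, and confirm the pulled image survived the restart (minikube standing in for the binary under test):

    minikube start -p test-preload-748391 --memory=3072 --preload=false --driver=docker --container-runtime=containerd
    minikube -p test-preload-748391 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-748391
    minikube start -p test-preload-748391 --preload=true --wait=true --driver=docker --container-runtime=containerd
    minikube -p test-preload-748391 image list   # busybox should still be listed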

TestScheduledStopUnix (109.24s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-928237 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-928237 --memory=3072 --driver=docker  --container-runtime=containerd: (33.439397465s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-928237 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1212 01:02:55.001975  184364 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:02:55.002090  184364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:02:55.002101  184364 out.go:374] Setting ErrFile to fd 2...
	I1212 01:02:55.002108  184364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:02:55.002368  184364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:02:55.002624  184364 out.go:368] Setting JSON to false
	I1212 01:02:55.002775  184364 mustload.go:66] Loading cluster: scheduled-stop-928237
	I1212 01:02:55.003217  184364 config.go:182] Loaded profile config "scheduled-stop-928237": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 01:02:55.003335  184364 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/scheduled-stop-928237/config.json ...
	I1212 01:02:55.003593  184364 mustload.go:66] Loading cluster: scheduled-stop-928237
	I1212 01:02:55.003754  184364 config.go:182] Loaded profile config "scheduled-stop-928237": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-928237 -n scheduled-stop-928237
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-928237 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1212 01:02:55.451580  184456 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:02:55.451779  184456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:02:55.451822  184456 out.go:374] Setting ErrFile to fd 2...
	I1212 01:02:55.451843  184456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:02:55.452300  184456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:02:55.452654  184456 out.go:368] Setting JSON to false
	I1212 01:02:55.452854  184456 daemonize_unix.go:73] killing process 184381 as it is an old scheduled stop
	I1212 01:02:55.452935  184456 mustload.go:66] Loading cluster: scheduled-stop-928237
	I1212 01:02:55.453638  184456 config.go:182] Loaded profile config "scheduled-stop-928237": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 01:02:55.453724  184456 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/scheduled-stop-928237/config.json ...
	I1212 01:02:55.453912  184456 mustload.go:66] Loading cluster: scheduled-stop-928237
	I1212 01:02:55.454041  184456 config.go:182] Loaded profile config "scheduled-stop-928237": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1212 01:02:55.460483    4290 retry.go:31] will retry after 82.574µs: open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/scheduled-stop-928237/pid: no such file or directory
I1212 01:02:55.460993    4290 retry.go:31] will retry after 122.83µs: open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/scheduled-stop-928237/pid: no such file or directory
I1212 01:02:55.462083    4290 retry.go:31] will retry after 271.967µs: open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/scheduled-stop-928237/pid: no such file or directory
I1212 01:02:55.463199    4290 retry.go:31] will retry after 245.964µs: open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/scheduled-stop-928237/pid: no such file or directory
I1212 01:02:55.464279    4290 retry.go:31] will retry after 271.618µs: open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/scheduled-stop-928237/pid: no such file or directory
I1212 01:02:55.465386    4290 retry.go:31] will retry after 513.137µs: open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/scheduled-stop-928237/pid: no such file or directory
I1212 01:02:55.466496    4290 retry.go:31] will retry after 970.187µs: open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/scheduled-stop-928237/pid: no such file or directory
I1212 01:02:55.467571    4290 retry.go:31] will retry after 870.953µs: open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/scheduled-stop-928237/pid: no such file or directory
I1212 01:02:55.468671    4290 retry.go:31] will retry after 2.966162ms: open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/scheduled-stop-928237/pid: no such file or directory
I1212 01:02:55.471841    4290 retry.go:31] will retry after 2.547641ms: open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/scheduled-stop-928237/pid: no such file or directory
I1212 01:02:55.475062    4290 retry.go:31] will retry after 8.391692ms: open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/scheduled-stop-928237/pid: no such file or directory
I1212 01:02:55.484273    4290 retry.go:31] will retry after 12.424885ms: open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/scheduled-stop-928237/pid: no such file or directory
I1212 01:02:55.497505    4290 retry.go:31] will retry after 17.985367ms: open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/scheduled-stop-928237/pid: no such file or directory
I1212 01:02:55.515772    4290 retry.go:31] will retry after 15.074705ms: open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/scheduled-stop-928237/pid: no such file or directory
I1212 01:02:55.531333    4290 retry.go:31] will retry after 38.737042ms: open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/scheduled-stop-928237/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-928237 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-928237 -n scheduled-stop-928237
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-928237
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-928237 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1212 01:03:21.401438  185129 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:03:21.401619  185129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:03:21.401647  185129 out.go:374] Setting ErrFile to fd 2...
	I1212 01:03:21.401670  185129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:03:21.401941  185129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:03:21.402227  185129 out.go:368] Setting JSON to false
	I1212 01:03:21.402364  185129 mustload.go:66] Loading cluster: scheduled-stop-928237
	I1212 01:03:21.402775  185129 config.go:182] Loaded profile config "scheduled-stop-928237": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1212 01:03:21.402879  185129 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/scheduled-stop-928237/config.json ...
	I1212 01:03:21.403132  185129 mustload.go:66] Loading cluster: scheduled-stop-928237
	I1212 01:03:21.403288  185129 config.go:182] Loaded profile config "scheduled-stop-928237": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1212 01:03:41.617480    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:03:52.051789    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-928237
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-928237: exit status 7 (76.644614ms)

-- stdout --
	scheduled-stop-928237
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-928237 -n scheduled-stop-928237
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-928237 -n scheduled-stop-928237: exit status 7 (66.4808ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-928237" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-928237
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-928237: (4.184883405s)
--- PASS: TestScheduledStopUnix (109.24s)
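The scheduled-stop flow exercised above, condensed (a new --schedule replaces the previous pending stop, as the "killing process ... old scheduled stop" log line shows, and --cancel-scheduled clears it; minikube stands in for the binary under test):

    minikube stop -p scheduled-stop-928237 --schedule 5m
    minikube status --format={{.TimeToStop}} -p scheduled-stop-928237   # pending stop is visible here
    minikube stop -p scheduled-stop-928237 --cancel-scheduled
    minikube stop -p scheduled-stop-928237 --schedule 15s
    # ~15s later the profile is down and status exits 7 / prints "Stopped"
    minikube status -p scheduled-stop-928237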

TestInsufficientStorage (12.62s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-298467 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-298467 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.081992682s)

-- stdout --
	{"specversion":"1.0","id":"75dc7fa3-086a-430d-9969-ef684ba314eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-298467] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f5de56f4-df18-4e0a-ac62-28feed3562e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22101"}}
	{"specversion":"1.0","id":"b6d21311-8869-4c58-b39e-625a18953109","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c78333d9-b80e-467f-9ed2-41d13cdf423e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig"}}
	{"specversion":"1.0","id":"f6e02ef1-2d1f-4270-a049-3e06a9a8d759","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube"}}
	{"specversion":"1.0","id":"81c26a04-4aba-4972-986a-792a108adfba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"936cab98-fc90-4c1b-9a4e-ee2ce841f3fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6218a50a-5a82-44d9-b2ef-4053ed5f9bda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7281ea50-0e19-46d9-80a4-a34d0285108d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"cbe17632-c8d8-4edc-9900-440acb827b6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cde6cdf6-d1d6-4644-913c-d126e6a33870","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4ac63c92-24a9-4b2e-80a3-dddf732a545c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-298467\" primary control-plane node in \"insufficient-storage-298467\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f549fae9-9ef8-4ac8-b961-d3462fe2156b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765275396-22083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8cdbd9f5-bc97-45fe-8df1-941f89217596","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d0e2cb41-f619-484e-a0b4-a9d6d3fe357a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-298467 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-298467 --output=json --layout=cluster: exit status 7 (298.703159ms)

-- stdout --
	{"Name":"insufficient-storage-298467","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-298467","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1212 01:04:21.109368  186960 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-298467" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-298467 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-298467 --output=json --layout=cluster: exit status 7 (290.636962ms)

-- stdout --
	{"Name":"insufficient-storage-298467","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-298467","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1212 01:04:21.401049  187028 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-298467" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig
	E1212 01:04:21.410943  187028 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/insufficient-storage-298467/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-298467" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-298467
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-298467: (1.951243008s)
--- PASS: TestInsufficientStorage (12.62s)
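This test simulates a full disk via the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE values visible in the JSON events, then asserts on machine-readable codes: exit 26 (RSRC_DOCKER_STORAGE) from start, and HTTP-style StatusCode 507 from status. A sketch, assuming the two values are injected as environment variables as the event log suggests:

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p insufficient-storage-298467 --memory=3072 --output=json --wait=true --driver=docker   # exit 26
    minikube status -p insufficient-storage-298467 --output=json --layout=cluster   # exit 7, StatusCode 507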

TestRunningBinaryUpgrade (61.83s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2431252125 start -p running-upgrade-615639 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2431252125 start -p running-upgrade-615639 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (31.777023749s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-615639 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1212 01:13:41.615673    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:13:52.051999    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-615639 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (26.641969599s)
helpers_test.go:176: Cleaning up "running-upgrade-615639" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-615639
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-615639: (2.329049694s)
--- PASS: TestRunningBinaryUpgrade (61.83s)

TestMissingContainerUpgrade (125.78s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3739319411 start -p missing-upgrade-650464 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3739319411 start -p missing-upgrade-650464 --memory=3072 --driver=docker  --container-runtime=containerd: (1m6.525998959s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-650464
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-650464
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-650464 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-650464 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (55.010974325s)
helpers_test.go:176: Cleaning up "missing-upgrade-650464" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-650464
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-650464: (2.088015387s)
--- PASS: TestMissingContainerUpgrade (125.78s)
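The missing-container scenario: provision with an old minikube release, remove the container out from under it with raw docker commands, then verify the current binary can recreate it. The sequence, condensed from the run above (the /tmp path is the downloaded v1.35.0 release binary):

    /tmp/minikube-v1.35.0.3739319411 start -p missing-upgrade-650464 --memory=3072 --driver=docker --container-runtime=containerd
    docker stop missing-upgrade-650464 && docker rm missing-upgrade-650464
    out/minikube-linux-arm64 start -p missing-upgrade-650464 --memory=3072 --driver=docker --container-runtime=containerd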

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-881266 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-881266 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (111.543142ms)

-- stdout --
	* [NoKubernetes-881266] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
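The guard here: --no-kubernetes and --kubernetes-version are mutually exclusive, and the MK_USAGE error (exit 14) points at a globally configured version as the usual cause. Sketch of the failing call and the remedy suggested by the error text (minikube standing in for the binary under test):

    minikube start -p NoKubernetes-881266 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker   # exit 14, MK_USAGE
    minikube config unset kubernetes-version   # remedy suggested by the error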

TestPause/serial/Start (91.96s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-861131 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-861131 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m31.963738689s)
--- PASS: TestPause/serial/Start (91.96s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (44.09s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-881266 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-881266 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (43.514875724s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-881266 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (23.79s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-881266 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-881266 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (21.430467788s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-881266 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-881266 status -o json: exit status 2 (330.45537ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-881266","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-881266
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-881266: (2.031921362s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.79s)
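Note that `status -o json` exits 2 here yet still prints a usable payload: Host is Running while Kubelet and APIServer are Stopped, which is exactly what a --no-kubernetes restart should produce. A sketch for consuming that output; the struct mirrors only the fields visible above (the real payload may carry more), and the binary path and profile name are taken from this run:

// status.go: decodes `minikube status -o json` for one profile.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "NoKubernetes-881266",
		"status", "-o", "json").Output()
	if err != nil {
		// status exits non-zero (2) when components are stopped, as above,
		// but stdout still holds the JSON; only bail on other failures.
		if ee, ok := err.(*exec.ExitError); !ok || ee.ExitCode() != 2 {
			log.Fatal(err)
		}
	}
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}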

                                                
                                    
TestNoKubernetes/serial/Start (7.8s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-881266 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-881266 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.795444497s)
--- PASS: TestNoKubernetes/serial/Start (7.80s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
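A --no-kubernetes profile records version v0.0.0, so the check reduces to confirming no Kubernetes binaries landed in that version's cache directory. A sketch of that directory check, under the assumption that "no downloads" means the v0.0.0 cache dir is absent or empty (path copied from the log; the file name no_downloads.go is hypothetical):

// no_downloads.go: asserts the v0.0.0 cache dir holds no Kubernetes binaries.
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	dir := "/home/jenkins/minikube-integration/22101-2343/.minikube/cache/linux/arm64/v0.0.0"
	entries, err := os.ReadDir(dir)
	if os.IsNotExist(err) {
		fmt.Println("OK: cache directory absent, nothing was downloaded")
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	if len(entries) != 0 {
		log.Fatalf("FAIL: %d unexpected cached files in %s", len(entries), dir)
	}
	fmt.Println("OK: cache directory empty")
}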

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-881266 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-881266 "sudo systemctl is-active --quiet service kubelet": exit status 1 (303.745511ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
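The check leans on systemd semantics: `systemctl is-active` exits 0 only when the unit is active and 3 when it is inactive (the ssh wrapper surfaces that as exit status 1 here), so the non-zero exit is the success condition. A sketch of the same probe through the minikube ssh wrapper, with the binary path and profile name from this run:

// kubelet_probe.go: asserts kubelet is NOT running inside the node.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-881266",
		"sudo systemctl is-active --quiet service kubelet").Run()
	if err == nil {
		fmt.Println("FAIL: kubelet is active but should be stopped")
		return
	}
	// Any non-zero exit means the unit is not active, which is what we want
	// for a --no-kubernetes profile.
	fmt.Println("OK: kubelet inactive:", err)
}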

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.13s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.13s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-881266
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-881266: (1.308082773s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.86s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-881266 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-881266 --driver=docker  --container-runtime=containerd: (6.862586946s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.86s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-881266 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-881266 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.847799ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (9.07s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-861131 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1212 01:05:57.042935    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-861131 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (9.041960695s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (9.07s)

                                                
                                    
TestPause/serial/Pause (0.87s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-861131 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.87s)

                                                
                                    
TestPause/serial/VerifyStatus (0.42s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-861131 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-861131 --output=json --layout=cluster: exit status 2 (421.548486ms)

                                                
                                                
-- stdout --
	{"Name":"pause-861131","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-861131","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)
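The `--layout=cluster` output above encodes pause state in HTTP-style codes: 418 for Paused, 405 for Stopped, 200 for OK, and the command's exit status 2 signals a non-running cluster. A sketch for decoding that layout; the field names and codes mirror only the single sample above, so treat the struct as an approximation of the full schema:

// layout.go: decodes the cluster-layout status JSON seen above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type node struct {
	Name       string
	StatusCode int
	StatusName string
	Components map[string]component
}

type clusterStatus struct {
	Name          string
	StatusCode    int
	StatusName    string
	BinaryVersion string
	Nodes         []node
}

func main() {
	// Trimmed copy of the sample printed by the test above.
	sample := []byte(`{"Name":"pause-861131","StatusCode":418,"StatusName":"Paused","BinaryVersion":"v1.37.0","Nodes":[{"Name":"pause-861131","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`)
	var cs clusterStatus
	if err := json.Unmarshal(sample, &cs); err != nil {
		log.Fatal(err)
	}
	for _, n := range cs.Nodes {
		for _, c := range n.Components {
			fmt.Printf("%s/%s: %s\n", n.Name, c.Name, c.StatusName)
		}
	}
}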

                                                
                                    
TestPause/serial/Unpause (0.83s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-861131 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.83s)

                                                
                                    
TestPause/serial/PauseAgain (1.16s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-861131 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-861131 --alsologtostderr -v=5: (1.162870862s)
--- PASS: TestPause/serial/PauseAgain (1.16s)

                                                
                                    
TestPause/serial/DeletePaused (4.52s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-861131 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-861131 --alsologtostderr -v=5: (4.518675701s)
--- PASS: TestPause/serial/DeletePaused (4.52s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.17s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-861131
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-861131: exit status 1 (23.370742ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-861131: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.17s)
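Here the failing `docker volume inspect` is the assertion: a deleted profile should leave no volume behind, and Docker exits 1 with "no such volume" for a missing one. A sketch of that check (profile name from this run):

// volume_gone.go: asserts a profile's Docker volume no longer exists.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("docker", "volume", "inspect", "pause-861131").CombinedOutput()
	if err != nil {
		// Exit status 1 with "no such volume" is the expected, passing case.
		fmt.Println("OK: volume deleted:", err)
		return
	}
	fmt.Printf("FAIL: volume still present: %s\n", out)
}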

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.05s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.05s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (304.06s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1681302317 start -p stopped-upgrade-461553 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1681302317 start -p stopped-upgrade-461553 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (34.686061607s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1681302317 -p stopped-upgrade-461553 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1681302317 -p stopped-upgrade-461553 stop: (1.276226111s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-461553 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1212 01:08:41.615384    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:08:52.051676    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:10:04.693013    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:10:57.043138    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 01:11:55.114588    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-461553 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m28.10207066s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (304.06s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.99s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-461553
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-461553: (1.986336345s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.99s)

                                                
                                    
TestNetworkPlugins/group/false (3.54s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-341847 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-341847 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (195.107921ms)

                                                
                                                
-- stdout --
	* [false-341847] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22101
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 01:14:48.619144  237873 out.go:360] Setting OutFile to fd 1 ...
	I1212 01:14:48.619291  237873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:14:48.619316  237873 out.go:374] Setting ErrFile to fd 2...
	I1212 01:14:48.619337  237873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 01:14:48.619627  237873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
	I1212 01:14:48.620086  237873 out.go:368] Setting JSON to false
	I1212 01:14:48.621010  237873 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7035,"bootTime":1765495054,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1212 01:14:48.621091  237873 start.go:143] virtualization:  
	I1212 01:14:48.624736  237873 out.go:179] * [false-341847] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 01:14:48.628477  237873 out.go:179]   - MINIKUBE_LOCATION=22101
	I1212 01:14:48.628697  237873 notify.go:221] Checking for updates...
	I1212 01:14:48.634652  237873 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:14:48.637614  237873 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
	I1212 01:14:48.640409  237873 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
	I1212 01:14:48.643217  237873 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 01:14:48.646166  237873 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:14:48.649642  237873 config.go:182] Loaded profile config "kubernetes-upgrade-439215": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1212 01:14:48.649789  237873 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 01:14:48.679548  237873 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 01:14:48.679689  237873 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 01:14:48.738364  237873 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 01:14:48.7291743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 01:14:48.738469  237873 docker.go:319] overlay module found
	I1212 01:14:48.741684  237873 out.go:179] * Using the docker driver based on user configuration
	I1212 01:14:48.744626  237873 start.go:309] selected driver: docker
	I1212 01:14:48.744645  237873 start.go:927] validating driver "docker" against <nil>
	I1212 01:14:48.744660  237873 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:14:48.748157  237873 out.go:203] 
	W1212 01:14:48.751328  237873 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1212 01:14:48.754145  237873 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-341847 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-341847

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-341847

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-341847

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-341847

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-341847

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-341847

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-341847

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-341847

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-341847

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-341847

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-341847

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-341847" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-341847" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-341847" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-341847" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-341847" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-341847" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-341847" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-341847" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-341847" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-341847" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-341847" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 01:07:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-439215
contexts:
- context:
    cluster: kubernetes-upgrade-439215
    user: kubernetes-upgrade-439215
  name: kubernetes-upgrade-439215
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-439215
  user:
    client-certificate: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kubernetes-upgrade-439215/client.crt
    client-key: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kubernetes-upgrade-439215/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-341847

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341847"

                                                
                                                
----------------------- debugLogs end: false-341847 [took: 3.181912136s] --------------------------------
helpers_test.go:176: Cleaning up "false-341847" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-341847
--- PASS: TestNetworkPlugins/group/false (3.54s)
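As with the earlier flag-conflict test, the pass condition is the rejection itself: the stderr above shows `X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI`, because CRI runtimes delegate pod networking to a CNI plugin and cannot run with --cni=false. A hypothetical sketch of that rule (not minikube's actual code; the runtime list is an assumption for illustration):

// cnicheck.go: hypothetical sketch of the validation this test exercises.
package main

import (
	"fmt"
	"os"
)

func validateCNI(runtime, cni string) error {
	// Runtimes that rely entirely on CNI for pod networking cannot start
	// without one, so an explicit --cni=false must be refused up front.
	if cni == "false" && (runtime == "containerd" || runtime == "crio") {
		return fmt.Errorf("The %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("containerd", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14) // the usage-error exit status seen above
	}
}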

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (71.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-147581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-147581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m11.794020196s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (71.79s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-971096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-971096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m26.278442992s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-147581 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [51d7ec8e-d26c-473e-87d1-c14bbe3f12a1] Pending
helpers_test.go:353: "busybox" [51d7ec8e-d26c-473e-87d1-c14bbe3f12a1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [51d7ec8e-d26c-473e-87d1-c14bbe3f12a1] Running
E1212 01:20:57.043036    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003564282s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-147581 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.40s)
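The wait above tracks the busybox pod through Pending to Running within an 8m0s budget. A sketch of the same kind of label-selector wait driven through kubectl; note this polls only the pod phase, which is a simplification of the test helper's "healthy" check (it also considers readiness), and the file name wait_busybox.go is hypothetical:

// wait_busybox.go: polls for a Running pod matching the test's label.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(8 * time.Minute) // the test's wait budget
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "old-k8s-version-147581",
			"get", "pods", "-l", "integration-test=busybox",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("busybox is running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for integration-test=busybox")
}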

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-147581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-147581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.297696345s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-147581 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-147581 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-147581 --alsologtostderr -v=3: (12.170882591s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-147581 -n old-k8s-version-147581
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-147581 -n old-k8s-version-147581: exit status 7 (72.221551ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-147581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
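The "exit status 7 (may be ok)" annotation reflects that `minikube status` signals cluster state through its exit code as well as stdout; the log only establishes that 7 accompanies a Stopped host, so the sketch below treats any non-zero exit as acceptable when stdout reads "Stopped" (further exit-code semantics are not assumed):

// status_after_stop.go: mirrors the EnableAddonAfterStop preamble: query the
// host state and tolerate a non-zero exit as long as the host is Stopped.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-147581",
		"-n", "old-k8s-version-147581").Output()
	state := strings.TrimSpace(string(out))
	if err != nil && state != "Stopped" {
		log.Fatalf("unexpected status error: %v (state=%q)", err, state)
	}
	fmt.Println("host state:", state) // "Stopped" here is expected, not fatal
}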

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (48.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-147581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-147581 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (48.105424199s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-147581 -n old-k8s-version-147581
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.49s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-971096 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c1a1a821-3369-4346-b382-87cd160f7cf5] Pending
helpers_test.go:353: "busybox" [c1a1a821-3369-4346-b382-87cd160f7cf5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [c1a1a821-3369-4346-b382-87cd160f7cf5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003400801s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-971096 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-971096 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-971096 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.21679393s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-971096 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-971096 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-971096 --alsologtostderr -v=3: (12.633006445s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.63s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-971096 -n default-k8s-diff-port-971096
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-971096 -n default-k8s-diff-port-971096: exit status 7 (82.408228ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-971096 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-971096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-971096 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (49.367016369s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-971096 -n default-k8s-diff-port-971096
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.83s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-5wcth" [16472174-214e-42e1-9c98-f23064cb91c6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003454457s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-5wcth" [16472174-214e-42e1-9c98-f23064cb91c6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003591489s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-147581 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-147581 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-147581 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-147581 -n old-k8s-version-147581
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-147581 -n old-k8s-version-147581: exit status 2 (354.613286ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-147581 -n old-k8s-version-147581
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-147581 -n old-k8s-version-147581: exit status 2 (326.948705ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-147581 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-147581 -n old-k8s-version-147581
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-147581 -n old-k8s-version-147581
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (78.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m18.898398594s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (78.90s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-nstks" [6c245d61-78c1-40f0-bc59-3c900a5598bd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003422996s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-nstks" [6c245d61-78c1-40f0-bc59-3c900a5598bd] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003146775s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-971096 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-971096 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-971096 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-971096 --alsologtostderr -v=1: (1.184122638s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-971096 -n default-k8s-diff-port-971096
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-971096 -n default-k8s-diff-port-971096: exit status 2 (430.11988ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-971096 -n default-k8s-diff-port-971096
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-971096 -n default-k8s-diff-port-971096: exit status 2 (363.781205ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-971096 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-971096 -n default-k8s-diff-port-971096
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-971096 -n default-k8s-diff-port-971096
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-648696 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [575cd8ba-de0e-442b-83e5-f0cd9ba55822] Pending
helpers_test.go:353: "busybox" [575cd8ba-de0e-442b-83e5-f0cd9ba55822] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [575cd8ba-de0e-442b-83e5-f0cd9ba55822] Running
E1212 01:23:41.615069    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003884489s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-648696 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-648696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-648696 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-648696 --alsologtostderr -v=3
E1212 01:23:52.052264    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-648696 --alsologtostderr -v=3: (12.182030765s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-648696 -n embed-certs-648696
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-648696 -n embed-certs-648696: exit status 7 (75.985587ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-648696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (53.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-648696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (52.913410894s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-648696 -n embed-certs-648696
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-5hvdf" [bd8e9746-b13d-4781-9a6d-d23508bcc55b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002776134s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-5hvdf" [bd8e9746-b13d-4781-9a6d-d23508bcc55b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003445621s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-648696 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-648696 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-648696 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-648696 -n embed-certs-648696
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-648696 -n embed-certs-648696: exit status 2 (355.317243ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-648696 -n embed-certs-648696
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-648696 -n embed-certs-648696: exit status 2 (330.89353ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-648696 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-648696 -n embed-certs-648696
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-648696 -n embed-certs-648696
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.05s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-361053 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-361053 --alsologtostderr -v=3: (1.289179145s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-361053 -n no-preload-361053
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-361053 -n no-preload-361053: exit status 7 (72.139619ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-361053 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-256959 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-256959 --alsologtostderr -v=3: (1.324924943s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-256959 -n newest-cni-256959
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-256959 -n newest-cni-256959: exit status 7 (65.637938ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-256959 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-256959 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (79.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-341847 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-341847 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m19.644268778s)
--- PASS: TestNetworkPlugins/group/auto/Start (79.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-341847 "pgrep -a kubelet"
I1212 01:42:57.904284    4290 config.go:182] Loaded profile config "auto-341847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-341847 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-25v8v" [bc4e23c2-a636-4d93-8918-182e7f90bb52] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-25v8v" [bc4e23c2-a636-4d93-8918-182e7f90bb52] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003978837s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-341847 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-341847 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-341847 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (78.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-341847 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-341847 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m18.870463664s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (78.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-5v4v9" [e3aed484-19b6-48b2-b8bb-1ec8cfdc2c97] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003677031s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-341847 "pgrep -a kubelet"
I1212 01:44:53.352838    4290 config.go:182] Loaded profile config "kindnet-341847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-341847 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-8nfzq" [fa076555-b20a-4a90-925b-b7a03693ecdf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-8nfzq" [fa076555-b20a-4a90-925b-b7a03693ecdf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.002829074s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-341847 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-341847 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-341847 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (57.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-341847 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-341847 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (57.761700611s)
--- PASS: TestNetworkPlugins/group/calico/Start (57.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-4gszj" [4cf72cb2-9201-4cc3-b2ac-2383575c17b5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003999748s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-341847 "pgrep -a kubelet"
I1212 01:46:28.900796    4290 config.go:182] Loaded profile config "calico-341847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-341847 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-fr4jc" [b3560210-294b-47f9-a679-911473c041f7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-fr4jc" [b3560210-294b-47f9-a679-911473c041f7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004590842s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-341847 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-341847 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-341847 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (55.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-341847 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-341847 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (55.24175404s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-341847 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-341847 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-8zfc9" [0e61e5d3-be1f-4ee5-9998-2a67c2d565f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-8zfc9" [0e61e5d3-be1f-4ee5-9998-2a67c2d565f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003938471s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-341847 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-341847 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-341847 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (76.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-341847 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-341847 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m16.357766247s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (76.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-341847 "pgrep -a kubelet"
I1212 01:49:44.855913    4290 config.go:182] Loaded profile config "enable-default-cni-341847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-341847 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-tdwtc" [666496fe-d5d8-4e7e-9fdd-9a9aea468588] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-tdwtc" [666496fe-d5d8-4e7e-9fdd-9a9aea468588] Running
E1212 01:49:48.333823    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kindnet-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004840571s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-341847 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-341847 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-341847 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (54.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-341847 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-341847 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (54.251508167s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-55kgb" [84b53367-2740-4023-bf8f-ef615adddacd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.002727695s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-341847 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-341847 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-gbnvz" [27961de7-3e58-499b-a508-82109b8cac06] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-gbnvz" [27961de7-3e58-499b-a508-82109b8cac06] Running
E1212 01:51:18.648645    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/default-k8s-diff-port-971096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003802245s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-341847 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-341847 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-341847 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (71.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-341847 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-341847 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m11.865578143s)
--- PASS: TestNetworkPlugins/group/bridge/Start (71.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-341847 "pgrep -a kubelet"
I1212 01:52:58.643583    4290 config.go:182] Loaded profile config "bridge-341847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (8.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-341847 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-vh8t6" [8a012e05-c797-486f-8e9e-a541a94eaf19] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 01:52:59.377970    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-vh8t6" [8a012e05-c797-486f-8e9e-a541a94eaf19] Running
E1212 01:53:01.939803    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004031503s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-341847 exec deployment/netcat -- nslookup kubernetes.default
E1212 01:53:07.061777    4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/custom-flannel-341847/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-341847 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-341847 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (38/417)

Order skiped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.42
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
388 TestStartStop/group/disable-driver-mounts 0.19
392 TestNetworkPlugins/group/kubenet 3.54
400 TestNetworkPlugins/group/cilium 3.75

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.42s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-471728 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-471728" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-471728
--- SKIP: TestDownloadOnlyKic (0.42s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-539158" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-539158
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

TestNetworkPlugins/group/kubenet (3.54s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-341847 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-341847

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-341847

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-341847

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-341847

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-341847

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-341847

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-341847

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-341847

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-341847

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-341847

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: /etc/hosts:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: /etc/resolv.conf:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-341847

>>> host: crictl pods:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: crictl containers:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> k8s: describe netcat deployment:
error: context "kubenet-341847" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-341847" does not exist

>>> k8s: netcat logs:
error: context "kubenet-341847" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-341847" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-341847" does not exist

>>> k8s: coredns logs:
error: context "kubenet-341847" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-341847" does not exist

>>> k8s: api server logs:
error: context "kubenet-341847" does not exist

>>> host: /etc/cni:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: ip a s:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: ip r s:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: iptables-save:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: iptables table nat:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-341847" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-341847" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-341847" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: kubelet daemon config:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> k8s: kubelet logs:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 01:07:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-439215
contexts:
- context:
    cluster: kubernetes-upgrade-439215
    user: kubernetes-upgrade-439215
  name: kubernetes-upgrade-439215
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-439215
  user:
    client-certificate: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kubernetes-upgrade-439215/client.crt
    client-key: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kubernetes-upgrade-439215/client.key
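
Every failure above has the same root cause: the kubenet profile was skipped before "minikube start" ran, so no kubenet-341847 context was ever written to the kubeconfig; the dump above contains only the leftover kubernetes-upgrade-439215 entry and an empty current-context. A quick manual confirmation (hypothetical, assuming the same KUBECONFIG):

  kubectl config get-contexts
  # lists only kubernetes-upgrade-439215; kubenet-341847 is absent, hence
  # "context was not found for specified context: kubenet-341847"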

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-341847

>>> host: docker daemon status:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: docker daemon config:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: docker system info:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: cri-docker daemon status:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: cri-docker daemon config:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: cri-dockerd version:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: containerd daemon status:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: containerd daemon config:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: containerd config dump:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: crio daemon status:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: crio daemon config:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: /etc/crio:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

>>> host: crio config:
* Profile "kubenet-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341847"

----------------------- debugLogs end: kubenet-341847 [took: 3.381849874s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-341847" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-341847
--- SKIP: TestNetworkPlugins/group/kubenet (3.54s)

TestNetworkPlugins/group/cilium (3.75s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-341847 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-341847

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-341847

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-341847

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-341847

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-341847

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-341847

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-341847

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-341847

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-341847

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-341847

>>> host: /etc/nsswitch.conf:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: /etc/hosts:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: /etc/resolv.conf:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-341847

>>> host: crictl pods:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: crictl containers:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> k8s: describe netcat deployment:
error: context "cilium-341847" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-341847" does not exist

>>> k8s: netcat logs:
error: context "cilium-341847" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-341847" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-341847" does not exist

>>> k8s: coredns logs:
error: context "cilium-341847" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-341847" does not exist

>>> k8s: api server logs:
error: context "cilium-341847" does not exist

>>> host: /etc/cni:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: ip a s:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: ip r s:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: iptables-save:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: iptables table nat:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-341847

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-341847

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-341847" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-341847" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-341847

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-341847

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-341847" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-341847" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-341847" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-341847" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-341847" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: kubelet daemon config:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> k8s: kubelet logs:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 01:07:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-439215
contexts:
- context:
    cluster: kubernetes-upgrade-439215
    user: kubernetes-upgrade-439215
  name: kubernetes-upgrade-439215
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-439215
  user:
    client-certificate: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kubernetes-upgrade-439215/client.crt
    client-key: /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/kubernetes-upgrade-439215/client.key
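
Note: the kubeconfig above is the common root cause of every failure in this debugLogs run. current-context is empty and the only context defined is kubernetes-upgrade-439215, so each collector that targets the never-created cilium-341847 context fails with "context was not found" (kubectl configuration errors) or context "cilium-341847" does not exist (kubectl command errors). For reference, a minimal way to inspect this state by hand (illustrative commands, not part of the test run):

  kubectl config get-contexts                            # shows only kubernetes-upgrade-439215
  kubectl config use-context kubernetes-upgrade-439215   # select the one remaining context
  kubectl --context cilium-341847 get pods               # reproduces: context "cilium-341847" does not exist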

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-341847

>>> host: docker daemon status:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: docker daemon config:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: docker system info:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: cri-docker daemon status:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: cri-docker daemon config:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: cri-dockerd version:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: containerd daemon status:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: containerd daemon config:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: containerd config dump:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: crio daemon status:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: crio daemon config:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: /etc/crio:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

>>> host: crio config:
* Profile "cilium-341847" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341847"

----------------------- debugLogs end: cilium-341847 [took: 3.599883807s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-341847" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-341847
--- SKIP: TestNetworkPlugins/group/cilium (3.75s)
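Note: the SKIP above means the cilium variant of TestNetworkPlugins never started a cluster on this configuration; debugLogs still ran against the never-created profile, which produced the missing-profile/context errors collected above. A sketch of re-running only this test against a local build (assumes minikube's standard go test harness; the exact driver and binary flags depend on the repo's Makefile):

  go test ./test/integration -v -run 'TestNetworkPlugins/group/cilium'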